ros, kinect, model

Bus 001 Device 003: ID 045e:078c Microsoft Corp.
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               1.10
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0         8
  idVendor           0x045e Microsoft Corp.
  idProduct          0x078c
  bcdDevice            1.11
  iManufacturer           1
  iProduct                2
  iSerial                 0
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength           34
    bNumInterfaces          1
    bConfigurationValue     1
    iConfiguration          4
    bmAttributes         0xa0
      (Bus Powered)
      Remote Wakeup
    MaxPower               70mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           1
      bInterfaceClass         3 Human Interface Device
      bInterfaceSubClass      1 Boot Interface Subclass
      bInterfaceProtocol      1 Keyboard
      iInterface              5
      HID Device Descriptor:
        bLength                 9
        bDescriptorType        33
        bcdHID               1.10
        bCountryCode            0 Not supported
        bNumDescriptors         1
        bDescriptorType        34 Report
        wDescriptorLength      65
        Report Descriptors:
          ** UNAVAILABLE **
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x81  EP 1 IN
        bmAttributes            3
          Transfer Type            Interrupt
          Synch Type               None
          Usage Type               Data
        wMaxPacketSize     0x0008  1x 8 bytes
        bInterval              24
{ "domain": "robotics.stackexchange", "id": 13948, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, kinect, model", "url": null }
• Oh, this makes a lot of sense! Thank you!! – Idaisa Mar 24 '18 at 15:37
• You are welcome :) – TheSimpliFire Mar 24 '18 at 15:38

When you expand something squared, you multiply each term by each other term, so in this case you have
\begin{aligned}(n^2+3n+1)^2&=n^2\cdot n^2 + n^2\cdot 3n+n^2\cdot 1\\&\;+3n\cdot n^2+3n\cdot3n+3n\cdot1\\&\;+1\cdot n^2+1\cdot3n+1\cdot1\\&=n^4+3n^3+n^2+3n^3+9n^2+3n+n^2+3n+1\\&=n^4+6n^3+11n^2+6n+1\end{aligned}

• Oh this is a neat method, thank you! – Idaisa Mar 24 '18 at 15:38

HINT: Expand using
$$(a+b+c)^2=a^2+b^2+c^2+2ab+2bc+2ca$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9752018354801187, "lm_q1q2_score": 0.8754340912266927, "lm_q2_score": 0.8976952866333484, "openwebmath_perplexity": 487.5758791971208, "openwebmath_score": 0.9685267210006714, "tags": null, "url": "https://math.stackexchange.com/questions/2704997/how-does-n4-6n3-11n2-6n-1-n2-3n-12" }
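The term-by-term expansion above can be cross-checked numerically. A Python sketch (not part of the original thread; `poly_mul` is a hypothetical helper that multiplies polynomials given as coefficient lists):

```python
def poly_mul(p, q):
    # p, q: coefficient lists, lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# n^2 + 3n + 1 as coefficients [1, 3, 1] (constant term first)
square = poly_mul([1, 3, 1], [1, 3, 1])
print(square)  # [1, 6, 11, 6, 1], i.e. n^4 + 6n^3 + 11n^2 + 6n + 1
```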
c, strings, windows, winapi

Title: Win32/C: Notepad wrapper that automatically converts Unix line endings to Windows line endings

At my work we're dealing with a lot of PHP files written by a third-party company, and sometimes we want to "quickly look at" these files in Notepad rather than having to open up a full IDE. The problem is that these files have \n line endings, which, as many of you know, Notepad doesn't handle correctly. Sure, there are solutions like installing Notepad++, but everyone knows that the first rule of engineering is that you always reinvent the wheel every chance you get. I'm kidding, I just really like coding.

My solution: write a wrapper for Notepad that silently converts files containing Unix line endings into Windows/DOS-style line endings and then opens the file in Notepad.

Program Source Code:

#include <Windows.h>
#include <strsafe.h>

// Defines for the line-ending conversion function
#define LESTATUS              INT
#define LE_NO_CHANGES_NEEDED  (0)
#define LE_CHANGES_SUCCEEDED  (1)
#define LE_CHANGES_FAILED     (-1)

LESTATUS WINAPI ConvertLineEndings(BYTE *inData, INT inLen, BYTE *outData, INT outLen, INT *bytesWritten)
{
    INT sourceIndex = 0, destIndex;

    // Fail immediately; no chance of success here.
    if (outLen < inLen)
        return LE_CHANGES_FAILED;

    // Try to determine if changes are needed
    while (sourceIndex < inLen)
    {
        // If an \r is immediately followed by an \n, no changes are needed to inData.
        if (inData[sourceIndex] == '\r')
        {
            if (sourceIndex < inLen - 1 && inData[sourceIndex + 1] == '\n')
            {
                memcpy(outData, inData, inLen);
                *bytesWritten = inLen;
                return LE_NO_CHANGES_NEEDED;
            }
            // If we encountered an \r without a following \n then changes are needed.
            break;
        }
        // If we encounter an \n without a preceding \r then changes are needed.
        if (inData[sourceIndex] == '\n')
            break;
        sourceIndex++;
    }
{ "domain": "codereview.stackexchange", "id": 22952, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, strings, windows, winapi", "url": null }
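For comparison, the normalization the C routine performs amounts to a two-step replacement in a higher-level language. This is a sketch of the intent only, not a translation of the Win32 program, and it deliberately ignores lone \r (old Mac endings), which the C code also flags as needing changes:

```python
def convert_line_endings(data: bytes) -> bytes:
    # Collapse existing \r\n pairs to \n first so they are not doubled,
    # then expand every bare \n into \r\n.
    return data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")

print(convert_line_endings(b"line1\nline2\r\nline3"))  # b'line1\r\nline2\r\nline3'
```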
+ n} \over 1 - t}\,\dd t -\int_{0}^{1}{1 - t^{n^{2}} \over 1 - t}\,\dd t}} \\[5mm]&=\lim_{n\ \to\ \infty}\bracks{% n + \pars{n - n^{2}}\pars{H_{n^{2} + n} - H_{n^{2}}}} \end{align} where $\ds{H_{m}}$ is a Harmonic Number.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9808759638081523, "lm_q1q2_score": 0.8109017319884521, "lm_q2_score": 0.8267117983401363, "openwebmath_perplexity": 836.9341012933176, "openwebmath_score": 0.998916745185852, "tags": null, "url": "https://math.stackexchange.com/questions/1075305/how-to-prove-lim-n-to-infty-sum-k-1n-fracnkn2k-frac32/1075828" }
ros, tf-tutorial, transform

Title: [turtle_pointer-6] process has died when running tf tutorial

I tried to follow the tutorial from here: http://www.ros.org/wiki/tf/Tutorials/Introduction%20to%20tf

But there's some error and a process died. This is the full message:

roslaunch turtle_tf turtle_tf_demo.launch
... logging to /home/albert/.ros/log/52b0e6d8-1099-11e2-aceb-5404a6dc3a5e/roslaunch-Albert-PC-11357.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.

started roslaunch server http://Albert-PC:50707/

SUMMARY
========

PARAMETERS
 * /rosdistro
 * /rosversion
 * /scale_angular
 * /scale_linear
 * /turtle1_tf_broadcaster/turtle
 * /turtle2_tf_broadcaster/turtle

NODES
  /
    sim (turtlesim/turtlesim_node)
    teleop (turtlesim/turtle_teleop_key)
    turtle1_tf_broadcaster (turtle_tf/turtle_tf_broadcaster.py)
    turtle2_tf_broadcaster (turtle_tf/turtle_tf_broadcaster.py)
    turtle_pointer (turtle_tf/turtle_tf_listener.py)

auto-starting new master
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[master]: started with pid [11373]
ROS_MASTER_URI=http://localhost:11311
{ "domain": "robotics.stackexchange", "id": 11260, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, tf-tutorial, transform", "url": null }
quantum-information, decoherence, noise \end{align} Using the definition of the channel acting on a set of states, we find (using $c$ and $s$ as shorthand for the cosine and sine terms) \begin{align} \mathcal{E}^{\otimes n}(|GHZ\rangle\langle GHZ|)=\frac{1}{K}\left\{\mathcal{E}\left[\left(|0\rangle\langle 0|\right)\right]^{\otimes n}+\mathcal{E}\left[\left(c|0\rangle\langle 0|+s|0\rangle\langle 1|\right)\right]^{\otimes n}+\mathcal{E}\left[\left(c|0\rangle\langle 0|+s|1\rangle\langle 0|\right)\right]^{\otimes n}+\mathcal{E}\left[\left(c^2|0\rangle\langle 0|+s^2|1\rangle\langle 1|+cs|1\rangle\langle 0|+cs|0\rangle\langle 1|\right)\right]^{\otimes n}\right\}. \end{align} As before, we only need to worry about terms $|0\rangle\langle 1|$ and $|1\rangle\langle 0|$, so we find \begin{align}
{ "domain": "physics.stackexchange", "id": 93162, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-information, decoherence, noise", "url": null }
c++, parsing, json, lex, yacc

%%

JsonObject
    : JsonMap     {LOG("JsonObject: JsonMap");}
    | JsonArray   {LOG("JsonObject: JsonArray");}

JsonMap
    : '{' JsonMapValueListOpt '}'   {LOG("JsonMap: { JsonMapValueListOpt }");}

JsonMapValueListOpt
    :                    {LOG("JsonMapValueListOpt: EMPTY");}
    | JsonMapValueList   {LOG("JsonMapValueListOpt: JsonMapValueList");}

JsonMapValueList
    : JsonMapValue                        {LOG("JsonMapValueList: JsonMapValue");}
    | JsonMapValueList ',' JsonMapValue   {LOG("JsonMapValueList: JsonMapValueList , JsonMapValue");}

JsonMapValue
    : JSON_STRING ':' JsonValue   {LOG("JsonMapValue: JSON_STRING : JsonValue");}

JsonArray
    : '[' JsonArrayValueListOpt ']'   {LOG("JsonArray: [ JsonArrayValueListOpt ]");}

JsonArrayValueListOpt
    :                      {LOG("JsonArrayValueListOpt: EMPTY");}
    | JsonArrayValueList   {LOG("JsonArrayValueListOpt: JsonArrayValueList");}

JsonArrayValueList
    : JsonValue                        {LOG("JsonArrayValueList: JsonValue");}
    | JsonArrayValueList ',' JsonValue {LOG("JsonArrayValueList: JsonArrayValueList , JsonValue");}
{ "domain": "codereview.stackexchange", "id": 8249, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, parsing, json, lex, yacc", "url": null }
c#, .net, classes

Title: Undirected graph data structure in C#

Description

A class representing an undirected graph. At the moment, it supports integer values as vertices. An example of the type of graph represented is shown in the following diagram:

It is represented internally as an array of adjacency lists. Each index of this array corresponds to a vertex number, and contains a List<int> of vertex numbers to which it is connected.

Purpose

Purely as an exercise in implementing a data structure. Implemented in C# after studying a Java-based implementation.

Code

The data structure class definition

public class UndirectedGraph
{
    // number of vertices
    private int _V;

    // number of edges
    private int _E = 0;

    // array of adjacency lists
    private List<int>[] _adj;

    /// <summary>
    /// Constructor to create a graph.
    /// </summary>
    /// <param name="v">Number of vertices</param>
    public UndirectedGraph(int v)
    {
        this._V = v;

        // create array of lists
        // initialise all lists to empty
        this._adj = new List<int>[v];
        for (int i = 0; i < this._adj.Length; i++)
            this._adj[i] = new List<int>();
    }

    // number of vertices
    public int V
    {
        get { return this._V; }
    }

    // number of edges
    public int E
    {
        get { return this._E; }
    }

    /// <summary>
    /// Add an edge to the graph.
    /// </summary>
    /// <param name="v">One vertex of the new edge.</param>
    /// <param name="w">The other vertex of the new edge.</param>
    public void addEdge(int v, int w)
    {
        // validate given node numbers
        if ((v > this._adj.Length) || (w > this._adj.Length))
            throw new ArgumentException("Invalid node number specified.");
{ "domain": "codereview.stackexchange", "id": 39015, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, .net, classes", "url": null }
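For comparison, the same adjacency-list layout fits in a few lines of Python. This is a sketch, not the reviewed code; note that the bounds check here rejects negatives and v == V, both of which slip past the C# code's `>` comparison:

```python
class UndirectedGraph:
    def __init__(self, v):
        self.V = v                          # number of vertices
        self.E = 0                          # number of edges
        self._adj = [[] for _ in range(v)]  # adjacency lists

    def add_edge(self, v, w):
        # reject out-of-range vertex numbers, including negatives and v == V
        if not (0 <= v < self.V and 0 <= w < self.V):
            raise ValueError("Invalid node number specified.")
        self._adj[v].append(w)
        self._adj[w].append(v)
        self.E += 1

g = UndirectedGraph(3)
g.add_edge(0, 1)
g.add_edge(1, 2)
print(g.E, g._adj)  # 2 [[1], [0, 2], [1]]
```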
java, game, animation, libgdx

Title: Creating and playing animations for a game using LibGDX

It feels like there's quite a lot of code involved in order to manually build up an animation using the libGDX framework. In my specific case, I am creating a number of animations for a portrait view of a character. The character will do things like talk, blink, and laugh. There are a handful of different characters to worry about. I would like to get some feedback on my approach. I'm hoping to simplify things as much as I can, but this is the best that I have come up with so far.

First, a texture atlas is created from a file. Then, the types from an enum are used to create a map of the types to the frames. I've removed all but one of the types just for brevity, but there is one for every single frame of animation.

PortraitType.java

public enum PortraitType {
    GOBLIN_TALK01("goblinTalkRight01", 106),
    GOBLIN_TALK02("goblinTalkRight02", 107),
    GOBLIN_TALK03("goblinTalkRight03", 108),
    GOBLIN_TALK04("goblinTalkRight04", 109),
    GOBLIN_TALK05("goblinTalkRight05", 110),
    GOBLIN_TALK06("goblinTalkRight06", 111),
    GOBLIN_TALK07("goblinTalkRight07", 112),
    GOBLIN_TALK08("goblinTalkRight08", 113),
    GOBLIN_TALK09("goblinTalkRight09", 114),
    GOBLIN_TALK10("goblinTalkRight10", 115),
    GOBLIN_TALK11("goblinTalkRight11", 116);

    public final String fileName;
    public final int id;

    private PortraitType(String fileName, int id) {
        this.fileName = fileName;
        this.id = id;
    }
}
{ "domain": "codereview.stackexchange", "id": 23521, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, game, animation, libgdx", "url": null }
g. $T(n) = T(n-2) + n^2$

Parts a-f can be solved directly by the master theorem (Theorem 4.1), which we restate below for ease of reference:

Theorem 4.1 (Master theorem). Let $a \geq 1$ and $b > 1$ be constants, let $f(n)$ be a function, and let $T(n)$ be defined on the nonnegative integers by the recurrence

$$T(n) = aT(n/b) + f(n),$$

where we interpret $n/b$ to mean either $\lfloor n/b \rfloor$ or $\lceil n/b \rceil$. Then $T(n)$ has the following asymptotic bounds:

1. If $f(n) = O(n^{\log_b a - \epsilon})$ for some constant $\epsilon > 0$, then $T(n) = \Theta(n^{\log_b a})$.

2. If $f(n) = \Theta(n^{\log_b a})$, then $T(n) = \Theta(n^{\log_b a} \log n)$.

3. If $f(n) = \Omega(n^{\log_b a + \epsilon})$ for some constant $\epsilon > 0$, and if $af(n/b) \leq cf(n)$ for some constant $c < 1$ and all sufficiently large $n$, then $T(n) = \Theta(f(n))$.

#### Problem 4-1a

$a = b = 2$, so that $\log_b a = 1$. Since $f(n) = n^4 = \Omega(n^{\log_b a + 3})$ and $af(n/b) = 2(n/2)^4 = \tfrac{1}{8}n^4 \leq \tfrac{1}{8}f(n)$, we see that $T(n) = \Theta(n^4)$ by case 3.

#### Problem 4-1b

$a = 1$ and $b = \frac{10}{7}$, so that $\log_b a = 0$. Since $f(n) = n = \Omega(n^{\log_b a + 1})$ and $af(n/b) = \tfrac{7}{10}n \leq \tfrac{7}{10}f(n)$, we see that $T(n) = \Theta(n)$ by case 3.

#### Problem 4-1c
{ "domain": "markhkim.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.962673113726775, "lm_q1q2_score": 0.8655599823803488, "lm_q2_score": 0.8991213840277783, "openwebmath_perplexity": 732.0085319858442, "openwebmath_score": 0.9267470836639404, "tags": null, "url": "https://markhkim.com/clrs/ch04/" }
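Part g does not fit the master theorem's $aT(n/b)+f(n)$ shape, but unrolling gives $T(n) = n^2 + (n-2)^2 + (n-4)^2 + \cdots = \Theta(n^3)$. A quick numeric sanity check (Python sketch, assuming base cases $T(0) = T(1) = 0$):

```python
def T(n):
    # iterative unrolling of T(n) = T(n-2) + n^2 with T(0) = T(1) = 0
    total = 0
    while n >= 2:
        total += n * n
        n -= 2
    return total

# T(n)/n^3 should settle near 1/6, consistent with Theta(n^3)
print(T(3000) / 3000**3)  # ~0.1668
```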
java, programming-challenge, playing-cards, stream, rags-to-riches

private static int lookupRank(char c) {
    switch (c) {
        case '2' : return 0;
        case '3' : return 1;
        case '4' : return 2;
        case '5' : return 3;
        case '6' : return 4;
        case '7' : return 5;
        case '8' : return 6;
        case '9' : return 7;
        case 'T' : return 8;
        case 'J' : return 9;
        case 'Q' : return 10;
        case 'K' : return 11;
        case 'A' : return 12;
    }
    throw new IllegalArgumentException("No such card '" + c + "'.");
}

private static final int[] REVERSE = { 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 };

// These constants are carefully selected to ensure that
// - STRAIGHT is > 3-of-a-kind
// - STRAIGHT and FLUSH are less than 4-of-a-kind and full-house.
// - STRAIGHT + FLUSH (12) is better than others.
private static final int STRAIGHT = 4;
private static final int FLUSH = 8;

// groups representing :
// HIGH_CARD, 1_PAIR, 2_PAIR, 3_OF_A_KIND, FULL_HOUSE, 4_OF_A_KIND
private static final int[] GROUPSCORE = { 0, 1, 2, 3, 9, 10 };
private static final int[] GROUPS = {
    groupHash(new int[]{ 1, 1, 1, 1, 1 }),
    groupHash(new int[]{ 1, 1, 1, 2 }),
    groupHash(new int[]{ 1, 2, 2 }),
    groupHash(new int[]{ 1, 1, 3 }),
    groupHash(new int[]{ 2, 3 }),
    groupHash(new int[]{ 1, 4 })
};
{ "domain": "codereview.stackexchange", "id": 12352, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, programming-challenge, playing-cards, stream, rags-to-riches", "url": null }
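A common alternative to a 13-arm switch like `lookupRank` is an index lookup into a rank string, where a character's position is exactly its rank value. A hypothetical Python equivalent of the same mapping:

```python
RANKS = "23456789TJQKA"

def lookup_rank(c):
    # the character's position in RANKS is its rank value
    i = RANKS.find(c)
    if i < 0:
        raise ValueError("No such card '%s'." % c)
    return i

print(lookup_rank('2'), lookup_rank('T'), lookup_rank('A'))  # 0 8 12
```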
cc.complexity-theory, space-bounded, big-picture

Title: Why do we consider log-space as a model of efficient computation (instead of polylog-space)?

This might be a subjective question rather than one with a concrete answer, but anyway.

In complexity theory we study the notion of efficient computation. There are classes like $\mathsf{P}$, which stands for polynomial time, and $\mathsf{L}$, which stands for log space. Both are taken to represent a kind of "efficiency", and they capture the difficulty of some problems pretty well.

But there is a difference between $\mathsf{P}$ and $\mathsf{L}$: while polynomial time, $\mathsf{P}$, is defined as the union of the problems that run in $O(n^k)$ time for some constant $k$, that is, $\mathsf{P} = \bigcup_{k \geq 0} \mathsf{TIME[n^k]}$, log space, $\mathsf{L}$, is defined as $\mathsf{SPACE[\log n]}$. If we mimic the definition of $\mathsf{P}$, it becomes $\mathsf{PolyL} = \bigcup_{k \geq 0} \mathsf{SPACE[\log^k n]}$, where $\mathsf{PolyL}$ is called the class of polylog space.

My question is: Why do we use log space as the notion of efficient computation, instead of polylog space?

One main issue may be complete problems. Under logspace many-one reductions, both $\mathsf{P}$ and $\mathsf{L}$ have complete problems. In contrast, if $\mathsf{PolyL}$ had complete problems under such reductions, we would contradict the space hierarchy theorem. But what if we moved to polylog reductions? Can we avoid such problems?

In general, if we try our best to fit $\mathsf{PolyL}$ into the notion of efficiency, and (if needed) modify some of the definitions so that it has all the good properties a "nice" class should have, how far can we go?
{ "domain": "cstheory.stackexchange", "id": 438, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cc.complexity-theory, space-bounded, big-picture", "url": null }
quantum-mechanics, quantum-field-theory, gauge-theory, quantum-electrodynamics, transport-phenomena

\end{equation} the equations become: \begin{equation} \begin{cases} E_k^R{\psi_k}_R = \left( \left(k+\frac12\right)\frac{2\pi}{r}+e(A^0+A^1)\right) {\psi_k}_R \\ E_k^L{\psi_k}_L = \left(-\left(k+\frac12\right)\frac{2\pi}{r}+e(A^0-A^1)\right) {\psi_k}_L \end{cases} \end{equation} which give us the spectrum: \begin{equation} \begin{cases} E_k^R = eA^0+\frac{2\pi}{r} \left(k+\frac12\right)+eA^1\\ E_k^L = eA^0-\frac{2\pi}{r} \left(k+\frac12\right)-eA^1 \end{cases} \end{equation}

So we see that $A^0$ shifts the energy of both components ($\psi_R$ and $\psi_L$) linearly and equally, creating or destroying both alike, which is related to $j_V$; while $A^1$ also shifts the energy linearly but in opposite directions, giving to one component and taking from the other, converting left-movers to right-movers and vice versa, which is related to $j_A$. Here we see the relation of the gauge freedom to the anomaly of the vector and axial currents/charges ($Q_V=|\psi_R|^2+|\psi_L|^2$ and $Q_A=|\psi_R|^2-|\psi_L|^2$). In the limit $r \to \infty$, which would mean going back to the ordinary infinite 1+1d Minkowski space-time, this doesn't make much sense to me.
{ "domain": "physics.stackexchange", "id": 89222, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, quantum-field-theory, gauge-theory, quantum-electrodynamics, transport-phenomena", "url": null }
c++, object-oriented, c++11, tree

void checkVowel(string toCheck, string output)
{
    if (find(vowels.begin(), vowels.end(), toCheck.at(0)) != vowels.end())
        cout << output << " an " << toCheck << "? ";
    else
        cout << output << " a " << toCheck << "? ";
}

First of all, I'd rewrite the constructors to use member initialization lists instead of assignment in the body of the ctor (where possible). For example, Node's ctor could become:

Node(T& data) : data(data), lChild(nullptr), rChild(nullptr) { }

Your destructor for Node currently does some pointless work:

template<typename T>
Node<T>::~Node() {
    delete lChild;
    delete rChild;
    lChild = nullptr;
    rChild = nullptr;
}

After the dtor runs, the object no longer exists, so setting its members to nullptr accomplishes nothing useful. This can be reduced to just:

template<typename T>
Node<T>::~Node() {
    delete lChild;
    delete rChild;
}

I'd go a little further than just putting Node and BinaryTree in the same file. I'd make Node a nested class inside the BinaryTree class. I'd also add a get to the Node class, so KnowledgeBase can use it instead of accessing Node's private data directly.

template<typename T>
struct BinaryTree {
    struct Node {
        friend class KnowledgeBase<T>;

        Node() : lChild(nullptr), rChild(nullptr) { }
        Node(T& data) : data(data), lChild(nullptr), rChild(nullptr) { }
        Node(const Node&);
{ "domain": "codereview.stackexchange", "id": 6394, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, object-oriented, c++11, tree", "url": null }
inorganic-chemistry, aqueous-solution, precipitation

Title: What is the precipitation reaction?

I asked my teacher if the reaction $$\ce{(NH4)2CO3 (aq) + MgBr2 (aq) <=> MgCO3 (s) + 2NH4Br (aq)}$$ can be considered a displacement reaction. She answered that it was a precipitation reaction instead. So what happens in this kind of reaction?

In fact, it is just $$\ce{Mg^2+(aq) + CO3^2-(aq) -> MgCO3(s)},$$ so the double displacement is purely formal, as the $\ce{NH4+}$ and $\ce{Br-}$ ions are just "spectator ions". There is no real $\ce{(NH4)2CO3(aq)}$ nor $\ce{MgBr2(aq)}$; writing them is just a way of keeping inventory of the particles. Precipitation reactions are, intuitively and not surprisingly, reactions forming precipitates.
{ "domain": "chemistry.stackexchange", "id": 17063, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, aqueous-solution, precipitation", "url": null }
physical-chemistry, equilibrium, kinetics, stoichiometry I might try to give you some intuition to back up that, for a given (elementary) reaction $\ce{A ->[k] B}$, the reaction rate $r$ can be written as $$r = \frac{d P_\ce{B}}{dt} = -\frac{d P_\ce{A}}{dt} \propto P_\ce{A}\text{.}$$ (Observe that the second equality above is true due to $P_\ce{A} + P_\ce{B} = \text{constant}$.) First, for an ideal gas, $P_\ce{A} = \frac{n_\ce{A}}{V} RT$. This means that $P_\ce{A} \propto n$ for fixed temperature and volume. Let's say the reaction happens as a random process. That is to say that, for every time interval $\Delta t$, we have a probability per unit time $p$ of having a single molecule $\ce{A}$ turning into $\ce{B}$. If we wait longer, proportionally more molecules will turn. We'll thus have, for initially $n_\ce{A}$ molecules of $\ce{A}$, after $\Delta t$ seconds, $$\Delta n_\ce{B} = p n_\ce{A} \Delta t\text{.}$$ This means that, in the time interval $\Delta t$, the population of $\ce{B}$ goes from 0 to $\Delta n_\ce{B}$ (assuming no $\ce{B}$ initially). From the stoichiometry of the reaction, $\Delta n_\ce{B} = -\Delta n_\ce{A}$ (i.e., there's conservation of moles). Thus, $$\frac{\Delta n_\ce{B}}{\Delta t} = -\frac{\Delta n_\ce{A}}{\Delta t} = p n_\ce{A}\text{.}$$
{ "domain": "chemistry.stackexchange", "id": 8712, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "physical-chemistry, equilibrium, kinetics, stoichiometry", "url": null }
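The per-molecule probability argument above integrates to exponential decay, $n_\ce{A}(t) = n_\ce{A}(0)\,e^{-pt}$. A small Euler-step simulation (a Python sketch with made-up numbers, not part of the original answer) shows the discrete rule $\Delta n_\ce{B} = p\,n_\ce{A}\,\Delta t$ converging to that law:

```python
import math

def simulate(n_a0, p, t_end, dt):
    # repeatedly apply delta_nB = p * nA * dt, so nA loses the same amount each step
    n_a = n_a0
    for _ in range(int(round(t_end / dt))):
        n_a -= p * n_a * dt
    return n_a

approx = simulate(1000.0, 0.5, 2.0, 1e-4)
exact = 1000.0 * math.exp(-0.5 * 2.0)
print(approx, exact)  # both close to 367.9
```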
genetics, virology, infection, coronavirus 4. The claim that SARS-CoV-2 contains four insertions from HIV-1: The paper claiming this has now been retracted due to severe criticism, and additionally a renowned HIV expert published an analysis (reference 5) demonstrating that the HIV-1 claimed insertions are random rather than targeted. 5. The claim that the SARS-CoV-2 virus is completely man-made: To design such a "weapon grade" virus in the lab, the design would usually start from a known virus backbone and then introduce logical changes (for example, complete genes from other viruses). This cannot be seen in the genome of the virus; rather, you see randomly distributed changes throughout the genome coming from virus evolution and not directed cloning. It is more likely that this virus originates from the recombination of a bat CoV (to which it is closely, but not directly related) and another, not yet known CoV in an intermediate host, like the palm civet for the 2003 CoV. References:
{ "domain": "biology.stackexchange", "id": 10635, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "genetics, virology, infection, coronavirus", "url": null }
python, clustering, k-means, unsupervised-learning

print('Creating cluster.csv')
with open('cluster.csv', 'w') as output:
    writer = csv.DictWriter(output, csvRows[0].keys())
    writer.writeheader()
    writer.writerows(csvRows)
print("\ncreated cluster.csv")

The results are not very satisfactory. They are very average. What could be done to improve my clustering algorithm? I would still want to use K-Means, but what other approach could be used in place of Tf-Idf? Also, if you guys think that there is a better alternative to K-Means, please suggest it, and it would be even more helpful if you could point me to sources/examples where people have already done similar stuff. I will always run the clustering on a volume close to 40 million.

You will likely see an improvement by using an algorithm like GloVe in place of Tf-Idf. Like Tf-Idf, GloVe represents a group of words as a vector. Unlike Tf-Idf, which is a Bag-of-Words approach, GloVe and similar techniques preserve the order of words in a tweet. Knowing what word comes before or after a word of interest is valuable information for assigning meaning. This Article runs through different techniques and gives a good description of each one. Also, This Script on Kaggle shows how to use pretrained word vectors to represent tweets.

For your clustering, I recommend checking out Density-Based clustering. K-means is a decent all-purpose algorithm, but it's a partitional method and depends on assumptions that might not be true, such as clusters being roughly equal in size. This is almost certainly not the case. This Blog has a great discussion on clustering for text. If you go with Density-Based clustering and you use Python, I highly recommend HDBSCAN by Leland McInnes. Good luck!
{ "domain": "datascience.stackexchange", "id": 7253, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, clustering, k-means, unsupervised-learning", "url": null }
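As a reference point for what Tf-Idf actually computes before any clustering happens, here is a minimal pure-Python sketch on toy documents (no sklearn; the function name and toy data are made up for illustration):

```python
import math
from collections import Counter

def tfidf(docs):
    # docs: list of token lists -> list of {term: tf * idf} dicts
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log(n / df[t]) for t, c in tf.items()})
    return vecs

docs = [["cat", "sat"], ["cat", "ran"], ["dog", "ran"]]
vecs = tfidf(docs)
# "sat" occurs in only one document while "cat" occurs in two,
# so "sat" gets the higher weight in the first document
print(vecs[0]["sat"] > vecs[0]["cat"])  # True
```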
Haskell Int overflow: Haskell is a pure functional language, which means that the result of any function call is fully determined by its arguments. The standard library provides two main integer types: Int, a fixed-size signed integer whose width is platform-dependent, and Integer, which is unbounded (arbitrary precision) and therefore cannot overflow; the efficiency hit for Integer comes from this unbounded representation. Arithmetic on Int wraps silently on overflow; one reference states that such overflows are considered programmer errors and are left to the programmer to avoid. Haskell also never converts numbers between types automatically: the programmer has to convert explicitly, for example with fromIntegral between integral types, or with read for a String -> Int conversion.
{ "domain": "sgmapps.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9449947055100816, "lm_q1q2_score": 0.8259983671871879, "lm_q2_score": 0.87407724336544, "openwebmath_perplexity": 2616.821533305383, "openwebmath_score": 0.2643059492111206, "tags": null, "url": "https://demo.sgmapps.com/-oprk/c1bc31-haskell-int-overflow" }
It is possible to get an explicit form for $F(z)$, using Riemann sums. For each integer $n\ge1$, consider:
$$S_n=\frac{2\pi}{n}\sum_{k=0}^{n-1}\ln\left|z-e^{2ik\pi/n}\right|$$
which is the $n$-th Riemann sum attached to the previous integral (and a uniform subdivision of $[0,2\pi]$ with constant step $\frac{2\pi}{n}$). Now:
$$S_n=\frac{2\pi}{n}\ln\left|\prod_{k=0}^{n-1}\left(z-e^{2ik\pi/n}\right)\right|=\frac{2\pi}{n}\ln\left|z^n-1\right|$$
and you can easily show that:
$$F(z)=\begin{cases}2\pi\ln\left|z\right| & \text{if } \left|z\right|>1\\ 0 & \text{otherwise}\end{cases}$$

When definite integrals are amenable to exact evaluation, it is typically the case that the more expedient approach involves an anti-derivative rather than the limit of a Riemann sum. Often computation of the limit may be straightforward or even trivial, but somewhat tedious, as is the case for integrals of $f: x \mapsto x$ or $f: x \mapsto x^2$. On the other hand, integrals with simple integrands and easily recognized anti-derivatives, such as $f: x\mapsto x^{-2}$, are more challenging with regard to the limit of a Riemann sum, and in that sense the Riemann sum may be "interesting." To make this more explicit, consider computing the integral $$\int_a^b x^{-2} \, dx = \lim_{n \to \infty}S_n$$ where
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9525741322079104, "lm_q1q2_score": 0.8495161368100846, "lm_q2_score": 0.8918110497511051, "openwebmath_perplexity": 231.74797514219324, "openwebmath_score": 0.9789736270904541, "tags": null, "url": "https://math.stackexchange.com/questions/1987358/calculate-an-integral-with-riemann-sum" }
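The closed form $S_n = \frac{2\pi}{n}\ln\left|z^n-1\right|$ and its two limits can be checked numerically. A Python sketch (real $z$ shown for simplicity; `abs` makes the same code work for complex $z$):

```python
import math

def S(n, z):
    # n-th Riemann sum in closed form: (2*pi/n) * ln|z^n - 1|
    return (2 * math.pi / n) * math.log(abs(z ** n - 1))

# |z| > 1: S_n approaches 2*pi*ln|z|
print(S(200, 3.0), 2 * math.pi * math.log(3.0))
# |z| < 1: S_n approaches 0
print(S(200, 0.5))
```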
condensed-matter, electronic-band-theory, topological-insulators, topological-phase, insulators

$$f(n-m) \equiv \frac{1}{\sqrt{2\pi}} \int \mathrm dk \ e^{i[(n-m) k + \varphi(k)]}$$ Observe that if $w=0$ then $\varphi(k)=0$, and so $f(n-m)=\delta_{n,m}$. This implies that $\phi_n = \frac{1}{\sqrt{2}}|n\rangle\otimes\pmatrix{-1\\1}$, the Wannier states are exactly localized to our lattice sites, and the ground state wavefunction $\Psi$ is a product state. On the other hand, if $v=0$ then $\varphi(k)=-ik$, and so $f(n-m)=\delta_{n,m+1}$. This implies that $$\phi_n = \frac{1}{\sqrt{2}} \left[|n\rangle\otimes\pmatrix{-1\\0} + |n-1\rangle\otimes \pmatrix{0\\1}\right]$$ which means that the Wannier states are not localized to the lattice sites; there is (short-range) entanglement between neighboring sites, and so the ground state $\Psi$ is not a simple product state. Here is what $f(\Delta)$ looks like in general:
{ "domain": "physics.stackexchange", "id": 89687, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "condensed-matter, electronic-band-theory, topological-insulators, topological-phase, insulators", "url": null }
# Converting between polar and Cartesian coordinates The polar coordinates $r$ and $\varphi$ can be converted to the Cartesian coordinates $x$ and $y$ by using the trigonometric functions sine and cosine: $$x = r \cos \varphi \,$$ $$y = r \sin \varphi \,$$ The Cartesian coordinates $x$ and $y$ can be converted to polar coordinates $r$ and $\varphi$ with $r \geq 0$ and $\varphi$ in the interval $[0, 2\pi)$: $r = \sqrt{x^2 + y^2} \quad$ (as in the Pythagorean theorem or the Euclidean norm), $$\varphi=\begin{cases} \arctan(\frac{y}{x}) & \mbox{if } x > 0 \mbox{ and } y \geq 0\\ \arctan(\frac{y}{x}) + 2\pi & \mbox{if } x > 0 \mbox{ and } y < 0\\ \arctan(\frac{y}{x}) + \pi & \mbox{if } x < 0 \\ \frac{\pi}{2} & \mbox{if } x = 0 \mbox{ and } y > 0\\ \frac{3\pi}{2} & \mbox{if } x = 0 \mbox{ and } y < 0\\ \text{undefined} & \mbox{if } x = 0 \mbox{ and } y = 0 \end{cases}$$ My question is: could $\varphi$ be written in one equation by using congruences modulo $2\pi$ as follows: $$\varphi \equiv \arctan(\frac{y}{x}) \ [2\pi]$$ which means there exists some integer $k$ such that $$\varphi=\arctan(\frac{y}{x})+2\cdot k\cdot \pi$$
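In code, the whole case analysis collapses into a single expression by combining `atan2` (which already handles every quadrant) with a reduction modulo $2\pi$ — a small Python sketch of my own, not from the question:

```python
import math

def to_polar(x, y):
    """Cartesian -> polar with r >= 0 and phi in [0, 2*pi)."""
    r = math.hypot(x, y)                     # sqrt(x**2 + y**2)
    phi = math.atan2(y, x) % (2 * math.pi)   # one formula replaces all the cases
    return r, phi

def to_cartesian(r, phi):
    return r * math.cos(phi), r * math.sin(phi)

print(to_polar(0.0, -2.0))  # (2.0, 4.712...), i.e. phi = 3*pi/2 as in the case table
```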
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.982287697148445, "lm_q1q2_score": 0.8076262697807662, "lm_q2_score": 0.8221891327004133, "openwebmath_perplexity": 390.0311286077669, "openwebmath_score": 0.9429933428764343, "tags": null, "url": "https://math.stackexchange.com/questions/714422/converting-between-polar-and-cartesian-coordinates" }
# New binomial coefficient identity? Is the following identity known? $$\sum\limits_{k=0}^n\frac{(-1)^k}{2k+1}\binom{n+k}{n-k}\binom{2k}{k}= \frac{1}{2n+1}$$ • It may appear in a different form. E.g., notice that $\binom{n+k}{n-k}\binom{2k}{k}=\binom{n+k}{n}\binom{n}{k}$. – Max Alekseyev Jan 30 '18 at 12:28 • known or not, Mathematica immediately evaluates it: link to Wolfram Alpha – Carlo Beenakker Jan 30 '18 at 12:49 • Can it be interpreted as an expected value? – Michael Hardy Jan 31 '18 at 0:17
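Independently of a closed-form proof, the identity is easy to check exactly in rational arithmetic for small $n$ — my own sketch, not from the thread:

```python
from fractions import Fraction
from math import comb

def lhs(n):
    """Left-hand side of the identity, computed exactly."""
    return sum(Fraction((-1) ** k, 2 * k + 1) * comb(n + k, n - k) * comb(2 * k, k)
               for k in range(n + 1))

for n in range(8):
    assert lhs(n) == Fraction(1, 2 * n + 1)
print("identity holds exactly for n = 0..7")
```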
{ "domain": "mathoverflow.net", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9719924818279466, "lm_q1q2_score": 0.840991293227502, "lm_q2_score": 0.865224072151174, "openwebmath_perplexity": 608.7220603449297, "openwebmath_score": 0.941927433013916, "tags": null, "url": "https://mathoverflow.net/questions/291738/new-binomial-coefficient-identity" }
ros, 3d-navigation, cmvision Title: Are there any online lectures or workshops available for ROS other than the ROS wiki? I require details about practical books, lectures, and workshops available online to learn ROS for visual navigation. Originally posted by Kishore Kumar on ROS Answers with karma: 173 on 2015-01-17 Post score: 0 A list of books dedicated to ROS: http://wiki.ros.org/Books A list of courses: http://wiki.ros.org/Courses You may find even more useful materials through the top page of the ROS wiki: http://wiki.ros.org/. Originally posted by 130s with karma: 10937 on 2015-01-17 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 20601, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, 3d-navigation, cmvision", "url": null }
classical-mechanics, lagrangian-formalism, coordinate-systems, constrained-dynamics Title: What are holonomic and non-holonomic constraints? I was reading Herbert Goldstein's Classical Mechanics. Its first chapter explains holonomic and non-holonomic constraints, but I still don't understand the underlying concept. Can anyone explain it to me in detail and in simple language? If you have a mechanical system with $N$ particles, you'd technically need $n = 3N$ coordinates to describe it completely. But often it is possible to express one coordinate in terms of others: for example, if two points are connected by a rigid rod, their relative distance does not vary. Such a condition on the system can be expressed as an equation that involves only the spatial coordinates $q_i$ of the system and the time $t$, but not the momenta $p_i$ or higher derivatives with respect to time. These are called holonomic constraints: $$f(q_i, t) = 0.$$ The cool thing about them is that they reduce the degrees of freedom of the system: if you have $s$ constraints, you end up with $n' = 3N-s < n$ degrees of freedom. An example of a holonomic constraint can be seen in a mathematical pendulum. The swinging point on the pendulum has two degrees of freedom ($x$ and $y$). The length $l$ of the pendulum is constant, so we can write the constraint as $$x^2 + y^2 - l^2 = 0.$$ This is an equation that depends only on the coordinates. Furthermore, it does not explicitly depend on time, and is therefore also a scleronomous constraint. With this constraint, the number of degrees of freedom is now 1.
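To make the pendulum example concrete: parametrizing the bob by a single angle $\theta$ shows directly that one generalized coordinate suffices once the constraint is imposed. A small Python sketch of my own, not from the original answer:

```python
import math

L = 2.0  # pendulum length; constant -> holonomic, scleronomous constraint

def position(theta):
    """One generalized coordinate theta replaces the pair (x, y)."""
    return L * math.sin(theta), -L * math.cos(theta)

# The constraint x^2 + y^2 - L^2 = 0 holds identically for every theta:
for theta in (0.0, 0.7, 2.0, -1.3):
    x, y = position(theta)
    print(abs(x * x + y * y - L * L))  # ~0 each time
```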
{ "domain": "physics.stackexchange", "id": 49434, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics, lagrangian-formalism, coordinate-systems, constrained-dynamics", "url": null }
recommender-system Title: User based recommendation factoring in user data The question is: what algorithms (and libraries) should I use if I want to build a recommender system with the following data representation:

USER_ID  ZIP   Movie1  Movie2  Movie3
1        2483  5       0       3
2        2483  4       1       5
3        2345  3       1       5

Basically I want to factor in user data into a recommendation of a movie. (ZIPs can be transformed to long/lat, but that's another question and out of scope now.) I was searching the internet for hours with no success, so I will be grateful if someone can point me in the right direction.

Collaborative Filtering:
- Match users to people with similar tastes – recommend what they like.
- Commonly used in e-retail.
- Avoids the issue of users only being recommended more of what they already like (allows serendipity).
- Example: your scenario.
- Methods: Euclidean distance, cosine distance, Pearson correlation coefficient (most common).

Content-Based Recommendation:
- Match users directly to products and content.
- Recommend based on what you have bought or viewed in the past.
- Commonly used for document recommendation: webpages, news articles, blogs, etc.
- Example: which movie(s) might our user like?
- Option 1: The user selects preferences for the various features using pull-down menus etc. We match against the movies using vector-space methods (described next). Methods: vector-space method, k-nearest neighbour (kNN).
- Option 2: The user rates a sample of the movies (explicitly or implicitly) as like/dislike; we then build a user profile model for that user using machine learning. Method: decision trees.
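As a concrete starting point for the collaborative-filtering route, here is a minimal Pearson-correlation sketch over the question's rating table — plain Python, my own illustration (libraries such as scikit-learn or Apache Mahout provide production versions of this):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient between two rating vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Movie ratings from the question's table
user1 = [5, 0, 3]
user2 = [4, 1, 5]
user3 = [3, 1, 5]
print(pearson(user1, user2))  # ~0.80: users 1 and 2 have fairly similar taste
print(pearson(user2, user3))  # ~0.96: users 2 and 3 are nearly identical
```

Users with high correlation become the "neighbors" whose ratings drive the recommendation.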
{ "domain": "datascience.stackexchange", "id": 2223, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "recommender-system", "url": null }
c++, primes return factors; } void Factors::display() const { FactorsList factors = calculate(); for (auto iter = factors.cbegin(); iter != factors.cend(); ++iter) { std::cout << iter->first << " x " << iter->second << "\n"; } } Primes.h #ifndef PRIMES_H #define PRIMES_H #include <cstdint> #include <map> class Primes { private: typedef std::map<std::uint64_t, std::uint64_t> PrimesList; std::uint64_t integer; PrimesList calculate() const; public: Primes(std::uint64_t); void display() const; }; #endif Primes.cpp #include "Primes.h" #include <iostream> Primes::Primes(std::uint64_t i) : integer(i) {} Primes::PrimesList Primes::calculate() const { std::uint64_t intCopy = integer; std::uint64_t divisor = 2; PrimesList primes; while (intCopy % divisor == 0) { intCopy /= divisor; primes[divisor]++; } for (divisor = 3; intCopy > 1; divisor += 2) { while (intCopy % divisor == 0) { intCopy /= divisor; primes[divisor]++; } } return primes; } void Primes::display() const { PrimesList primes = calculate();
{ "domain": "codereview.stackexchange", "id": 5149, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, primes", "url": null }
c, template, vectors, macros, c11 void (*shrink_to_fit)(struct _vector_##T*); \ void (*clear)(struct _vector_##T*); \ void (*erase1)(struct _vector_##T*, const T*); \ void (*erase2)(struct _vector_##T*, const T*, const T*); \ void (*push_back)(struct _vector_##T*, T); \ void (*pop_back)(struct _vector_##T*); \ void (*resize1)(struct _vector_##T*, size_t); \ void (*resize2)(struct _vector_##T*, size_t, T); \ } _vector_functions_##T; \ \ typedef struct _vector_##T \ { \ T* _data; \ size_t _size; \ size_t _capacity; \ const _vector_functions_##T* _functions; \ } Vector_##T; \ \ Vector_##T* new_Vector_##T(); \ void vector_delete_##T(Vector_##T*); \ T vector_at_##T(const Vector_##T*, size_t); \ T vector_front_##T(const Vector_##T*); \ T vector_back_##T(const Vector_##T*); \ T* vector_data_##T(const Vector_##T*); \ T* vector_begin_##T(Vector_##T*); \ const T* vector_cbegin_##T(const Vector_##T*); \ T* vector_end_##T(Vector_##T*); \ const T* vector_cend_##T(const Vector_##T*); \ bool vector_is_empty_##T(const Vector_##T*); \
{ "domain": "codereview.stackexchange", "id": 6892, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, template, vectors, macros, c11", "url": null }
html, css, layout .clients p { padding-top: 5px; font-size: 2rem; } .feedback { padding-top: 85px; overflow: auto; } .client_photo { float: left; width: 220px; height: 220px; background-color: #f9f9f9; border-radius: 100%; } .opinion { float: left; padding: 33px 51px 40px 45px; background-color: #f9f9f9; } .feedback div:first-child:after { content: ""; display: block; width: 0; height: 0; margin-top: 103px; margin-left: 243px; border-style: solid; border-width: 11px 20px 11px 0; border-color: transparent #f9f9f9 transparent transparent; } .feedback div:nth-child(2) { margin-top: 29px; margin-left: 43px; } .feedback div:nth-child(3) { clear: left; margin-top: 64px; margin-right: 43px; } .feedback div:last-child { margin-top: 40px; } .feedback div:last-child:before { content: ""; display: block; width: 0; height: 0; margin-top: 100px; margin-left: -43px; border-style: solid; border-width: 9.5px 0 9.5px 20px; border-color: transparent transparent transparent #f9f9f9; } blockquote { padding-bottom: 10px; font-size: 2.2rem; line-height: 33px; } blockquote:before { content: open-quote; } blockquote:after { content: close-quote; } cite { padding-left: 90px; font-size: 1.8rem; font-style: normal; } .next_project { padding-top: 70px; padding-bottom: 70px; color: #fff; } h4 { font-size: 4rem; }
{ "domain": "codereview.stackexchange", "id": 21787, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "html, css, layout", "url": null }
newtonian-mechanics, forces, torque Title: Question about torque reaction of a helicopter The image below about the torque reaction of a helicopter has confused me a bit. I would like to ask whether I understand the principles here correctly. In the picture we see a helicopter whose main propeller is spinning counter-clockwise (CCW), as indicated by the uppermost arrow. We know that the torque is $ \vec{T} = \vec{r}\times\vec{F} $, so according to the right-hand rule I want to calculate the direction of the torque of the main propeller here, or better yet to understand why it is in the direction illustrated. We know that the main propeller produces a force downwards on the air, and the air counters this force by producing a lift force that lifts the helicopter upwards. Now, my problem is which one of the two forces I should consider when calculating the torque with the above equation. I believe I should take into account the force-action (from the helicopter on the air) and not the force-reaction (from the air on the helicopter). If I consider the downward force-action, using the right-hand rule (I know how to do that correctly, I'm sure), then the direction of the torque should be that of the image below. However, the direction that the main rotor is spinning confuses me as to whether I'm doing the right thing. So finally, in order to compensate for this torque, the tail rotor must produce the torque reaction in the opposite direction of the main torque. My main question is whether my reasoning about the direction of the main torque is correct. Thanks in advance. If, as shown in the diagram, the main rotor is moving in the counter-clockwise direction, then the body of the helicopter will try to twist in the opposite direction (i.e., it will want to turn clockwise). The tail rotor thus needs to provide a thrust which gives a counter-clockwise torque in order to counter and cancel the clockwise torque on the helicopter body from the main rotor.
In order to do this, the tail rotor needs to blow air towards the left in the diagram shown, which then results in a thrust on the helicopter tail to the right, as shown in the diagram.
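One subtlety worth separating out: the yaw torque on the body comes from the tangential aerodynamic drag on the blades (and the engine torque fighting it), not from the vertical lift/thrust pair, whose moments about the shaft axis cancel by symmetry. A small Python sketch of my own — the coordinates are an assumption ($z$ up, rotor CCW viewed from above):

```python
def cross(a, b):
    """Right-handed cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# The rotor spins CCW seen from above (+z angular velocity).  A blade element
# at r = (1, 0, 0) moves in +y, so aerodynamic drag on it points along -y:
drag_torque = cross((1.0, 0.0, 0.0), (0.0, -1.0, 0.0))
print(drag_torque)  # (0.0, 0.0, -1.0): drag torque opposes the rotor's spin
# The engine must supply +z torque to keep the rotor turning; by Newton's
# third law the fuselage feels -z, i.e. it yaws clockwise seen from above --
# exactly the torque the tail rotor has to cancel.
```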
{ "domain": "physics.stackexchange", "id": 24964, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics, forces, torque", "url": null }
fasta, text-processing file2: >AOLJ01000027.1:50569-51417 Haloferax gibbonsii ATCC 33959 contig_27, whole genome shotgun sequence TCAGTCGTCGAAGGTCGGTTTCCGGGCTTCGATGGTGGCCGACACGAGGTACTCGCCGAGGTCGCGGTCG GCCTCCCAGTCGCTGATGAACTCAGTACTCTCGTCCTTGGGTGCAATCTCGACCGCCTCGAAGCCAGCGC >AM774418.1:280304-281149 Halobacterium salinarum R1 plasmid PHS3 complete genome ATGAGTAATGACAACGAGACGATGGTCGCCGATCGCGATCCCGAGGAGACTCGCGAGATGGTGCGGGAAC GCTACGCGGGAATCGCGACGAGCGGCCAGGACTGCTGTGGTGACGTCGGTTTGGATGTCTCTGGCGACGG >AOJK01000067.1:53467-54312 Halorubrum californiensis DSM 19288 contig_67, whole genome shotgun sequence TCAGTCGTCGCGGGCTGGTTTTCGGGCTTCGATGGTAGCGGAAACGAGGTACTCGCCGAGGTCGCGGTCG GCGTCCCAGTCGCTGATGAACTCGGTGCTCTCGTCCTTCGGCGCGATCTCGACCGCCTCGAAGCCCGCCT Expected output: >AOIT01000069.1:1403-2242 Haloterrigena limicola JCM 13563 contig_69, whole genome shotgun sequence ATGAGTAACGATACGGTAGCTGGTGACCGCGACCCCGAGGAGGCCCGTGAGATGGTGCGCGAACGCTACG GGACGATCGCTTCGGACGGTCAGGACTGCTGTGGCGATGTCGGCATCGATGTCACCGACGACGGTGGGTG
{ "domain": "bioinformatics.stackexchange", "id": 1028, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fasta, text-processing", "url": null }
1. Use a straightedge and a Sharpie or thin marker to draw a line near the edge of a piece of paper. 2. Place a point F roughly above the middle of the line toward the center of the paper. 3. Fold the paper over so point F is on the line from step 1 and crease the paper along the fold. 4. Open the paper back up and repeat step 3 several more times with F touching other parts of the step 1 line. 5. All of the creases from steps 3 & 4 outline a curve.  Trace that curve to see a parabola. This procedure works because you can fold the focus onto the directrix anywhere you like and the resulting crease will be tangent to the parabola defined by the directrix and focus.  By allowing the focus to “Travel along the Directrix”, you create the parabola’s locus.  Quite elegant, I thought. As I was playing with the different ways to create the parabola and thinking about the interplay between the two distances in the parabola’s definition, I wondered about the potential positions of the distance segments. 1. What is the shortest length of segment CP and where could it be located at that length?  What is the longest length of segment CP and where could it be located at that length? 2. Obviously, point C can be anywhere along the directrix.  While the focus-to-P segment is theoretically free to rotate in any direction, the parabola definition makes that seem not practically possible.  So, through what size angle is the focus-to-P segment practically able to rotate? 3. Assuming a horizontal directrix, what is the maximum slope the focus-to-P segment can achieve? 4. Can you develop a single solution to questions 2 and 3 that doesn’t require any computations or constructions? CONCLUSIONS I fully realize that none of this is new mathematics, but I enjoyed the walk through pure mathematics and the enjoyment of developing ever simpler and more elegant solutions to the problem.  
In the end, I now have a deeper and richer understanding of parabolas, and that was certainly worth the journey. ## Fun with Series
{ "domain": "wordpress.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9838471684931718, "lm_q1q2_score": 0.8401196477894054, "lm_q2_score": 0.8539127566694177, "openwebmath_perplexity": 1465.009807215267, "openwebmath_score": 0.6759624481201172, "tags": null, "url": "https://casmusings.wordpress.com/tag/quadratic/" }
java, game, parsing, file, libgdx Title: Convert Bitmap Font to Texture Atlas I wanted to render the textures that comprise a bitmap font glyph onto the screen directly as an Image in libGDX. When you make a bitmap font using a program (such as Hiero), it generates a text readable .fnt file along with a .png file that is the sprite sheet for the font. The only thing missing is a matching .atlas file to tell the location of the textures in that .png. This program takes a .fnt file as input and outputs a .atlas file that can be used with libGDX (and any engines that use the same type of atlas file). It parses the font file to find the names of the textures and their location on the sprite sheet. One reason I am seeking feedback is that this is the first program/code that I have put on Github with the intention of other people using it. It would be interesting to hear whether there are enough comments and enough documentation for others to understand and use software. Launcher.java public class Launcher { /** * The file name for the atlas generator must be passed in * without a file extension. */ public static void main(String[] args) throws IOException { String fileName = "test_dos437"; new FntToAtlasGenerator(fileName); } } FntToAtlasGenerator.java /** * The idea is to pass in the name of a .fnt file generated by Hiero * This program will generate a .atlas file that is compatible with libGDX * Next put the .atlas file and the .png that comes along with the .fnt file * into the android/assets folder of your libGDX project. * * @author baz * */ public class FntToAtlasGenerator { List<GlyphData> glyphs = new ArrayList<GlyphData>(); public FntToAtlasGenerator(String fileName) throws IOException {
{ "domain": "codereview.stackexchange", "id": 15336, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, game, parsing, file, libgdx", "url": null }
genetics, dna Mutation Now assuming a genome-wide mutation rate of 45 (Rahbari et al. 2016), and assuming that the number of mutations is Poisson distributed, the probability of no mutation happening is $e^{-45} ≈ 2.9 \cdot 10^{-20}$. Hence the probability becomes $\frac{1}{2^{46}} \cdot e^{-22.8} \cdot e^{-45} ≈ 10^{-44}$. Note that we assumed that the number of mutations and the number of crossovers are independent, which is definitely wrong. If we knew the correlation, we could make better estimates; this would bring our estimate to a somewhat higher probability.
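Plugging the numbers in confirms the order of magnitude (a quick sanity check of the arithmetic; the factors $2^{-46}$ and $e^{-22.8}$ come from the earlier part of the answer):

```python
import math

p_same_gametes = 1 / 2 ** 46      # identical chromosome draws
p_no_crossover = math.exp(-22.8)  # Poisson estimate from the earlier step
p_no_mutation = math.exp(-45)     # Poisson(45) genome-wide mutations

p = p_same_gametes * p_no_crossover * p_no_mutation
print(math.exp(-45))  # ~2.9e-20, matching the text
print(p)              # ~5e-44, i.e. roughly 10**-44
```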
{ "domain": "biology.stackexchange", "id": 8005, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "genetics, dna", "url": null }
nuclear-physics, mass-energy, antimatter, fusion Title: Is fission/fusion to iron the most efficient way to convert mass to energy? Is fission/fusion of any element to iron-56 (or nickel-62?) the best way to convert mass to energy, that doesn't involve black holes? In other words, will we be always limited to convert only about 1% of the mass available to energy? Are there other ways (using strangelets? antimatter?) to go beyond that limit? I exclude black holes as, as I understand, you can only extract a finite amount of energy by reducing their spin, so they are not viable for energy production on a cosmological scale. Matter-antimatter annihilation, such as an electron annihilating with a positron to form two high-energy photons, can convert 100% of the mass into radiation. So fission and fusion are far from the most efficient ways to convert mass into other forms of energy. Unfortunately, the universe appears to contain almost no antimatter.
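The ~1% ceiling quoted in the question can be sanity-checked from the binding-energy curve. This is my own back-of-the-envelope sketch; the constants are textbook values and should be treated as approximate:

```python
# Binding energy per nucleon of Fe-56 is about 8.79 MeV (Ni-62 is marginally
# higher), while a nucleon's rest energy is about 939 MeV.  Their ratio is the
# largest mass fraction that fusing free nucleons all the way to iron can release.
be_per_nucleon_mev = 8.79
nucleon_rest_mev = 939.0

fraction = be_per_nucleon_mev / nucleon_rest_mev
print(f"{fraction:.2%}")  # about 0.94%, hence the ~1% figure
```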
{ "domain": "physics.stackexchange", "id": 58887, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "nuclear-physics, mass-energy, antimatter, fusion", "url": null }
• If $$f(x) \in \text{DCLIQUE}$$, then $$f(x) = \langle G', k \rangle$$ and $$G'$$ contains two disjoint cliques of size $$k$$ each. We know by construction of $$f$$ that $$x = \langle G, k \rangle$$ and $$G'$$ consists of two disjoint copies $$G^1 = (V^1, E^1)$$ and $$G^2 = (V^2, E^2)$$ of $$G$$. Since the copies are disjoint, a clique in $$G'$$ of size $$k$$ is either completely contained in $$V^1$$ or is completely contained in $$V^2$$. This means the original graph $$G$$ must have a clique of size $$k$$. That is, $$x = \langle G, k \rangle \in \text{CLIQUE}$$. • The function $$f$$ is polynomial time computable since given a graph $$G$$, we can create two copies of it in polynomial time. Since $$\text{DCLIQUE}$$ is both in $$\mathsf{NP}$$ and $$\mathsf{NP}$$-hard, it is $$\mathsf{NP}$$-complete.
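The reduction $f$ described above is tiny in code — a Python sketch of my own, with vertices numbered $0..n-1$ and the second copy shifted by $n$:

```python
def double_graph(n, edges):
    """The reduction f: <G, k> -> <G', k>, where G' is two disjoint copies of G.

    Runs in time linear in the size of G, hence polynomial.
    """
    copy1 = list(edges)
    copy2 = [(u + n, v + n) for (u, v) in edges]
    return 2 * n, copy1 + copy2

# A triangle on 3 vertices becomes two disjoint triangles on 6 vertices:
n2, e2 = double_graph(3, [(0, 1), (1, 2), (0, 2)])
print(n2, e2)  # 6 [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
```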
{ "domain": "cs251.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.982287697666963, "lm_q1q2_score": 0.8030933542577086, "lm_q2_score": 0.8175744806385543, "openwebmath_perplexity": 278.10810952223625, "openwebmath_score": 0.9851598739624023, "tags": null, "url": "https://s23.cs251.com/Recitations/Rec_06_NP/contents.html" }
We also know that the variance of any random variable is $\geq 0$; it can be zero ($Var(X)=0$) if and only if $X$ is a constant (almost surely). Now $$V(X^* \pm Y^*)=V(X^*)+V(Y^*)\pm2Cov(X^*,Y^*).$$ As $Var(X^*)=1$ and $Var(Y^*)=1$, one of the two expressions above would be negative if $Cov(X^*,Y^*)$ were either greater than 1 or less than -1. Hence $1\geq \rho(X,Y)=\rho(X^*,Y^*)\geq -1$. If $\rho(X,Y)=Cov(X^*,Y^*)=1$ then $Var(X^*-Y^*)=0$, making $X^*=Y^*$ almost surely. Similarly, if $\rho(X,Y)=Cov(X^*,Y^*)=-1$ then $X^*=-Y^*$ almost surely. In either case, $Y$ would be a linear function of $X$ almost surely. We can see that the correlation coefficient values lie between -1 and +1. ## Pearson Correlation Coefficient use, Interpretation, Properties The correlation coefficient, or Pearson's Correlation Coefficient, was introduced by Karl Pearson around 1900. Pearson's Correlation Coefficient is a measure of the (degree of) strength of the linear relationship between two continuous random variables, denoted by $\rho_{XY}$ for a population and by $r_{XY}$ for a sample. The correlation coefficient can take values in the interval $[-1, 1]$. If the coefficient value is 1 or -1, there is a perfect linear relationship between the variables. A positive sign of the coefficient shows a positive (direct, or supportive) relationship, while a negative sign shows a negative (indirect, opposite) relationship between the variables. A zero value implies the absence of a linear relation; it does not by itself imply that the variables are independent, since there may be some other sort of relationship between the variables of interest, such as a systematic or circular relationship.
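The bound and its equality cases are easy to see numerically — a small Python sketch of my own:

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
print(pearson_r(x, [2 * a + 1 for a in x]))  # +1: Y a linear function of X, positive slope
print(pearson_r(x, [-3 * a for a in x]))     # -1: Y a linear function of X, negative slope
print(pearson_r(x, [5, 1, 4, 2, 3]))         # -0.3: strictly inside (-1, 1)
```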
{ "domain": "itfeature.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9879462187092607, "lm_q1q2_score": 0.8054058485717938, "lm_q2_score": 0.8152324826183822, "openwebmath_perplexity": 499.7595505517718, "openwebmath_score": 0.8466764092445374, "tags": null, "url": "https://itfeature.com/correlation-and-regression-analysis/pearsons-correlation-coefficient" }
quantum-field-theory, antimatter, dirac-equation, helicity Title: Helicity of Antiparticles I'm really confused by the helicity and handeness of antiparticles. Consider the particle case, the plane wave solution is $\psi(x) = u(p)e^{-ip\cdot x}$, where $$u^s(p) = \begin{pmatrix} \sqrt{p\cdot \sigma}\xi^s\\ \sqrt{p\cdot \bar{\sigma}}\xi^s\end{pmatrix}.$$ Assuming the particle is ultra-relativistic and moving along the $+\hat{z} $ direction, if the particle spins up, then: \begin{align} u^{\uparrow}(p) &= \sqrt{2E} \begin{pmatrix} 0\\0\\1\\0 \end{pmatrix}, &h&=1 &&\Rightarrow \text{Right-handed}, \\ u^{\downarrow}(p) &= \sqrt{2E} \begin{pmatrix} 0\\1\\0\\0 \end{pmatrix}, &h&=-1&&\Rightarrow \text{Left-handed}, \end{align} everything is quite simple. The antiparticle case, $\psi(x) = v(p)e^{ip\cdot x}$, where $$v^s(p) =\begin{pmatrix} \sqrt{p\cdot \sigma}\eta^s\\ -\sqrt{p\cdot \bar{\sigma}}\eta^s\end{pmatrix} $$ with $\eta^{\uparrow} = \binom{0}{1}$ and $\eta^{\downarrow} = \binom{1}{0}$. Again with the assumptions of the particle is ultra-relativistic and moving along the $+\hat{z} $ direction: \begin{align} v^{\uparrow}(p)
{ "domain": "physics.stackexchange", "id": 78018, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, antimatter, dirac-equation, helicity", "url": null }
sql SELECT h.hacker_id -- 1) Tell me IDs FROM ( -- 4) A daily submission is SELECT s.hacker_id -- 5) just the ID FROM Submissions AS s -- 6) of a submission GROUP BY s.hacker_id, DATE(s.submission_date) -- 7) de-duplicated by date. ) AS h GROUP BY h.hacker_id -- 2) de-duplicated HAVING DAYS(@START_DATE, @END_DATE) = COUNT(*) -- 3) with daily submissions. WITH scores AS ( -- 01) everyone's daily scores are SELECT s.hacker_id -- 02) their id, ,DATE(s.submission_date) AS date -- 03) the date, ,COUNT(*) AS score -- 04) and their submission count FROM Submissions AS s GROUP BY date, s.hacker_id -- 05) for every hacker every day. ) SELECT s.date, MIN(s.hacker_id) -- 14) and print the lowest ID. FROM scores AS s -- 11) get the daily scores INNER JOIN ( -- 06) The high scores are SELECT hs.date, MAX(hs.score) AS score -- 07) the date and highest score FROM scores AS hs GROUP BY hs.date -- 08) each day. ) AS high_scores ON s.date = high_scores.date -- 12) for that day AND s.score = high_scores.score -- 13) that are the high score, GROUP BY s.date -- 09) For each day ORDER BY s.date -- 10) in order
{ "domain": "codereview.stackexchange", "id": 44008, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "sql", "url": null }
We can demonstrate that this inequality cannot hold for all $n$. It is clearer when logarithms are used: $$(1.5)^n < Cn^m$$ which means $$n \ln(1.5) < \ln(C) + m \ln(n)$$ and $$\ln(1.5) < \frac{\ln(C)}{n} + m \frac{\ln(n)}{n}.$$ The term $\ln(C)/n$ goes to zero as $n \to \infty$ and so does $\ln(n)/n$. However, since $\ln(1.5) > 0$ this is a contradiction. • I've seen that argument, I like it. Can you make the part in the end actually do the math though? Thanks. – Jorge Fernández Hidalgo Jan 13 '15 at 15:28 • Which part would you like me to clarify? The polynomial growth or the exponential inequality? – Joel Jan 13 '15 at 15:30 • The part at the very end where you say polynomial growth is slower than exponential growth. Everything else is perfect. – Jorge Fernández Hidalgo Jan 13 '15 at 15:31 • Changed as requested. :) – Joel Jan 13 '15 at 15:41 Consider repeated differences. That is, write down a row $p(1),p(2),p(3),\ldots$ and then in the next row the differences $p(2)-p(1),p(3)-p(2),p(4)-p(3),\ldots$ of the first row, then in the third row the differences of the second row, and so on. It is well-known that a row obtained from a polynomial of degree $n$ produces values of a polynomial of degree $n-1$ in the next row, and so on, so eventually we arrive at a degree $0$ (i.e., constant) row and from then on at all-zero rows. However, if we start this with the Fibonacci sequence, the next row is the (shifted) Fibonacci sequence and we will never obtain an all-zero row.
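Both halves of the difference argument are easy to watch in a few lines of Python (my own illustration):

```python
def diff(row):
    """One round of forward differences."""
    return [b - a for a, b in zip(row, row[1:])]

# A degree-2 polynomial: three rounds of differencing reach the all-zero row.
row = [n * n for n in range(1, 10)]
for _ in range(3):
    row = diff(row)
print(row)  # [0, 0, 0, 0, 0, 0]

# Fibonacci: the difference row is the Fibonacci sequence again (shifted),
# so no amount of differencing ever produces an all-zero row.
fib = [1, 1, 2, 3, 5, 8, 13, 21, 34]
print(diff(fib))  # [0, 1, 1, 2, 3, 5, 8, 13]
```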
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9833429609670702, "lm_q1q2_score": 0.8062363319592819, "lm_q2_score": 0.8198933271118222, "openwebmath_perplexity": 419.70955972801505, "openwebmath_score": 0.7907012701034546, "tags": null, "url": "https://math.stackexchange.com/questions/1102712/proof-the-fibonacci-numbers-are-not-a-polynomial" }
slam, navigation, octomap, ros-kinetic, rtabmap-ros Title: Rtabmap_ros does not show OctoMap but point clouds are good Hi, I'm using two Point Grey Chameleon3 mono cameras set up as master/slave and synchronized, so they can work as a stereo camera. I have installed a ROS driver and am able to publish the camera topics. I'm using this ROS driver https://github.com/KumarRobotics/flea3, and with roslaunch flea3 stereo_node.launch left:=18081067 right:=17496474 camera:=stereo_camera I can launch the driver. I'm hand-holding the cameras, so for that case I'm using this tutorial http://wiki.ros.org/rtabmap_ros/Tutorials/StereoHandHeldMapping about hand-held stereo mapping. This is my launch file, which I'm launching with rtabmap$ roslaunch rtabmap_ros stereo_mapping.launch rtabmap_args:="--delete_db_on_start --Vis/CorFlowMaxLevel 9000 --Stereo/MaxDisparity 80000 --Odom/Strategy 1 --Odom/GuessMotion true --Vis/EstimationType 1 --Vis/CorType 1 --Odom/ResetCountdown 1" left_image_topic:=/stereo_camera/left/image_rect_color queue_size:=40 stereo:=true rviz:=true rtabmapviz:=false <launch> <arg name="pi/2" value="1.5707963267948966" /> <arg name="optical_rotate" value=" 0 0 0 -$(arg pi/2) 0 -$(arg pi/2)" /> <node pkg="tf" type="static_transform_publisher" name="camera_base_link_rtabmap" args="$(arg optical_rotate) base_link stereo_camera/left 100" />
{ "domain": "robotics.stackexchange", "id": 31589, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "slam, navigation, octomap, ros-kinetic, rtabmap-ros", "url": null }
bioinformatics, rna-sequencing Title: determining meaning of basic biological keywords about C. elegans First of all I have to say that I have no biology background, since I'm an undergraduate computer science student. Nowadays, for my research I need to use some of the databases related to bioinformatics (modENCODE and NCBI-GEO). Currently, I'm searching for RNA-Seq data and I have found the following one on the modENCODE website, which is stored in the NCBI-GEO database. RNA-Seq data In the description part of it, it is written that: Synchronized L1 worms cultured at 25degC for about 91 hrs to a point at which ~100% worms are resistant to 1% SDS treatment. Illumina sequencing of C. elegans dauer daf-2(el370) sample 2-1 DauerDAF2-2-1 polyA+ RNAseq random fragment library This sample is available through the modENCODE DCC (www.modencode.org) as: DauerDAF2-2-1 Local library name: DauerDAF2-2-1 Quality values are ASCII encoded Phred + 33.
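Of the keywords above, "Quality values are ASCII encoded Phred + 33" is the most mechanical to unpack: each per-base quality score $q$ is stored as the single character whose ASCII code is $q + 33$, and a Phred score $q$ corresponds to an error probability of $10^{-q/10}$. A small decoding sketch of my own (the quality string is a made-up example):

```python
quals = "II?+#"                       # hypothetical FASTQ quality string
scores = [ord(c) - 33 for c in quals]  # Phred+33 decoding is one subtraction
print(scores)  # [40, 40, 30, 10, 2]

probs = [10 ** (-q / 10) for q in scores]
print(probs[0])  # 0.0001: a Q40 base call is wrong about 1 time in 10,000
```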
{ "domain": "biology.stackexchange", "id": 3941, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "bioinformatics, rna-sequencing", "url": null }
keras # Compile model model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) # Fit model history = model.fit(X, Y, validation_split=0.46, nb_epoch=150, batch_size=3) A neural network is the wrong approach for a problem with a small training set. Even if you only have 2 features that are very representative of your function, 16 examples are not sufficient. As a very general rule of thumb, I use 100 examples for each feature in my dataset, and this then increases exponentially with every different class you expect. 16 instances are not enough to train a neural network; you will always have huge error margins when applying your model to a testing set. Even more problematic is the fact that you are using a very deep neural network, which will require even more training instances to properly learn the function. I suggest you use a general machine learning technique such as an SVM. This will likely give better results. Try these techniques instead and see what results you get: k-NN, kernel SVM, k-means clustering. But be warned: 16 training instances is still very little.
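Since the answer points toward simpler models like k-NN for tiny datasets, here is a minimal k-nearest-neighbours sketch in plain Python. The 16-point toy dataset and the `knn_predict` helper are illustrative assumptions, not the asker's actual data.

```python
# A minimal k-nearest-neighbours classifier: label a query point by
# majority vote among its k closest training points.
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# 16 instances: class 0 clusters near (0, 0), class 1 near (5, 5).
train = [((i % 4, i // 4), 0) for i in range(8)] + \
        [((5 + i % 4, 5 + i // 4), 1) for i in range(8)]

assert knn_predict(train, (0.5, 0.5)) == 0
assert knn_predict(train, (6.0, 6.0)) == 1
```

With only 16 instances, even this simple model's accuracy estimate carries a large error margin, which is the answer's point.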
{ "domain": "datascience.stackexchange", "id": 1587, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "keras", "url": null }
java, server, chat textArea = new JTextArea(); JScrollPane scroll = new JScrollPane(textArea); DefaultCaret caret = new DefaultCaret(); caret.setUpdatePolicy(DefaultCaret.ALWAYS_UPDATE); textArea.setEditable(true); textArea.setBorder(BorderFactory.createEtchedBorder()); GridBagConstraints gbc_textArea = new GridBagConstraints(); gbc_textArea.gridwidth = 3; gbc_textArea.insets = new Insets(2, 2, 5, 2); gbc_textArea.fill = GridBagConstraints.BOTH; gbc_textArea.gridx = 0; gbc_textArea.gridy = 0; this.getContentPane().add(scroll, gbc_textArea); JTextField textField = new JTextField(); textField.setBorder(BorderFactory.createEtchedBorder()); GridBagConstraints gbc_textArea_1 = new GridBagConstraints(); gbc_textArea_1.gridheight = 2; gbc_textArea_1.gridwidth = 2; gbc_textArea_1.fill = GridBagConstraints.BOTH; gbc_textArea_1.insets = new Insets(2, 2, 2, 5); gbc_textArea_1.gridx = 0; gbc_textArea_1.gridy = 3; this.getContentPane().add(textField, gbc_textArea_1); this.setVisible(true); log = new MainClassApp(); JButton btnNewButton = new JButton("Send"); btnNewButton.addActionListener(new ActionListener() {
{ "domain": "codereview.stackexchange", "id": 23132, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, server, chat", "url": null }
for the CIE IGCSE Maths exam. The curve here decreases on the left of the stationary point and increases on the right. Completing the square: finding turning points of differentiable functions like polynomials is essentially a type of problem dealt with in differential calculus. (Total for question 5 is 3 marks) 6. By completing the square, find the coordinates of the turning point of the curve with the equation y = x^2 + 10x - 8. You must show all your working. [Completing the square gives y = (x + 5)^2 - 33, so the turning point is (-5, -33).] Sometimes you go 'uphill', sometimes 'downhill'. What is the definition of a "turning point"? A turning point is a point on the graph at which the slope of the tangent changes its sign. The gradient function for a curve is found by differentiating the equation of the curve. In statistical hypothesis testing, a turning point test is a statistical test of the independence of a series of random variables. A polynomial of degree n will have at most n - 1 turning points, never more than the degree minus 1; the degree of a polynomial with one variable is the largest exponent of that variable. For large x, the graph starts to resemble the graph of y = ax^n, where ax^n is the term with the highest degree.
Differentiation - Finding Turning Points: MATHSprint, 2013. Name: Class/Set: Differentiation - Finding Turning Points (www.mathsprint.co.uk) 1: Find the co-ordinates and nature of any turning points: a)(i) a)(ii) b) c) View Solution. Increasing and Decreasing Functions: an increasing function is a function where, if x_1 > x_2, then f(x_1) > f(x_2), so as x increases, f(x) increases. A decreasing function is the reverse.
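As a concrete check of the completing-the-square method above (assuming the garbled exercise curve is y = x^2 + 10x - 8; that reading, and the helper name `y`, are assumptions):

```python
# Checking the completing-the-square example two ways in plain Python:
# the completed-square form agrees with the curve everywhere we sample,
# and the vertex formula x = -b/(2a) locates the same turning point.
def y(x):
    return x**2 + 10*x - 8

# (x + 5)^2 - 33 is the completed-square form of y.
for x in range(-10, 11):
    assert y(x) == (x + 5)**2 - 33

# The turning point of a quadratic ax^2 + bx + c sits at x = -b/(2a).
a, b = 1, 10
xt = -b / (2 * a)
print((xt, y(xt)))   # the turning point
```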
{ "domain": "poznan.pl", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9653811591688146, "lm_q1q2_score": 0.8002419015378691, "lm_q2_score": 0.8289388019824946, "openwebmath_perplexity": 738.9254417343516, "openwebmath_score": 0.43471258878707886, "tags": null, "url": "http://bip.pks.poznan.pl/0bznb/turning-points-math-96605e" }
homework-and-exercises, newtonian-mechanics, forces Title: Force between two blocks on a frictionless surface The question is: a single horizontal force $F$ is applied to a block of mass $M_1$ which is in contact with another block of mass $M_2$, as shown in the figure. The surfaces are frictionless. What will be the force between the blocks? I have another question: if the blocks are in contact and they do not have elasticity, shouldn't the force between them be $F$? Not quite. The key is that both bodies move with the same acceleration, which follows from the blocks staying in contact and being perfectly rigid. Since they move with the same acceleration, we can consider both bodies as one body of mass $M_1 + M_2$. Newton's second law is $\Sigma F=ma$, so the acceleration of both bodies is $F/(M_1+M_2)$. Now think of them as two separate bodies. $M_2$ has to move with an acceleration of $F/(M_1+M_2)$, and the normal force of $M_1$ on $M_2$ is what accelerates $M_2$. So the force of $M_1$ on $M_2$ is given by $M_2 F/(M_1+M_2)$.
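A quick numeric sanity check of the result above; the sample masses and force are arbitrary.

```python
# With frictionless contact, both blocks share acceleration F/(M1+M2),
# and the contact force on M2 is M2*F/(M1+M2).
M1, M2, F = 2.0, 3.0, 10.0

a = F / (M1 + M2)   # common acceleration of both blocks
N = M2 * a          # contact (normal) force of M1 on M2

# Consistency: F minus the reaction force on M1 accelerates M1 at the same rate.
assert abs((F - N) / M1 - a) < 1e-12
print(a, N)
```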
{ "domain": "physics.stackexchange", "id": 17316, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, newtonian-mechanics, forces", "url": null }
java, strings, reinventing-the-wheel, integer main() You are not using the array "reverseStr". The String "digits" is not used either. When I started your program the first time, I didn't know what to do, because your program didn't tell me. Before scanning a user input, I would tell the user to input something. System.out.println("Please enter number:"); Scanner scn = new Scanner(System.in); int number = scn.nextInt(); If you want to improve this point even further (which is highly recommended!), you can use something like this (you will have to use import java.util.InputMismatchException;): System.out.println("Please enter number:"); Scanner scn = new Scanner(System.in); int number; while(true) { try { number = scn.nextInt(); break; } catch(InputMismatchException e) { System.out.println("That's not a number!"); scn.nextLine(); } } This will check whether the user really enters a number. If the user enters something else, the program will ask them again to enter a number. Something like this is considered bad practice: if(number != 0) { length = ( int ) (Math.log10(number) + 1 );} Please write if(number != 0) { length = (int) (Math.log10(number) + 1); } instead. "valStr" is not necessary. You can just write: strSeq = returnDigitString(remainder) + strSeq; But this really is a minor point and just my personal opinion. It's fine to use an extra variable for this. Code structure I would use an extra method for the content of the main-method. Just use the main-method to call the new method.
{ "domain": "codereview.stackexchange", "id": 38102, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, strings, reinventing-the-wheel, integer", "url": null }
### Sockets The final part we need to cover about connections is sockets. A socket is a combination of IP address and port that is unique on every PC for every connection. It consists of the following : • IP and port of the client side of the connection • IP and port of the remote side of the connection • This part is mostly used for TCP connections, we won’t use it • Type of connection ( UDP, TCP, etc… ) We’ll be using sockets as a unit to keep track of a connection. You can compare sockets to the sockets you plug your electric devices into. Following that analogy, the wire would be the connection ( network cable ). So, in essence, it’s what connects your application to the Internet. I realize this all might be a lot of information and hard to wrap your head around, but it’ll get clearer when we put it to use. # SDL_net Now that we know a tiny bit about UDP connections, let’s try to set one up ourselves. For that purpose, we need the SDL_net library. It is capable of setting up and maintaining both UDP and TCP connections. Since UDP connections are way simpler, we’ll only cover those for now. Networking is, just like text rendering and .png loading, a separate part of SDL called SDL_net. We install it the exact same way as before : ### Installation Installing SDL2_net is done exactly like SDL2_image. Just replace SDL2_image with SDL2_net. Here’s the short version : #### Linux For Linux you need to install libSDL2_net or SDL2_net ( the actual package name might be different in different distributions. ) The linker flag is -lSDL2_net The process is more or less identical to that of setting up SDL2_image. If you can’t find SDL2_net in any repositories and it’s not installed by default, you might have to compile it yourself. For more information, see my blog post about setting up SDL2. #### Windows Similar to setting up SDL2 base. The difference is that you have to download the development files for SDL2_net
{ "domain": "headerphile.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.989830340446144, "lm_q1q2_score": 0.8226906122311185, "lm_q2_score": 0.831143054132195, "openwebmath_perplexity": 992.7625765702774, "openwebmath_score": 0.42407405376434326, "tags": null, "url": "http://headerphile.com/tag/game-programming/" }
$$\int_0^4 2xf(x^2) \, dx = \int_0^{16} f(u) \, du = 9$$ So there is really no evaluating occurring, just getting the first integral to match the second integral. So if I'm not mistaken, if the limit values in the first integral could not be made to perfectly match the second integral, then finding a value for this problem would be rendered impossible. And on the same note, if the function in the first integral were such that the substitution yielded a differing function, say $\int u^2 f(u)\,du$, then the problem would also be rendered impossible. Thanks for the boost on this problem LCKurtz. Are my conclusions about this problem on track here? 4. Sep 4, 2012 ### LCKurtz Yes, you have it. The problem was cooked up to work just right and give you practice with the u-substitution. Remember: whenever you express an integral with du you must also use the u limits.
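LCKurtz's point, that the value depends only on the constraint ∫₀¹⁶ f(u) du = 9 and not on f itself, can be checked numerically. The two sample choices of f below are assumptions, since the problem never specifies f.

```python
# Crude midpoint-rule check that ∫_0^4 2x f(x^2) dx = 9 for any f
# with ∫_0^16 f(u) du = 9. Two different such f are tried.
def midpoint_integral(g, lo, hi, n=20000):
    h = (hi - lo) / n
    return sum(g(lo + (k + 0.5) * h) for k in range(n)) * h

for f in (lambda u: 9 / 16,            # constant; integrates to 9 on [0, 16]
          lambda u: 18 * u / 16**2):   # linear; also integrates to 9
    total = midpoint_integral(lambda x: 2 * x * f(x**2), 0.0, 4.0)
    assert abs(total - 9) < 1e-4
print("both choices give 9")
```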
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9780517421267413, "lm_q1q2_score": 0.8332539221748647, "lm_q2_score": 0.8519528019683105, "openwebmath_perplexity": 758.958970130056, "openwebmath_score": 0.9146053194999695, "tags": null, "url": "https://www.physicsforums.com/threads/i-am-asked-to-find-xf-x-2-dx-if-f-x-dx-9.633465/" }
php, mysql, mysqli /** * Wrapper for MySQLi_STMT::attr_get(). * * @param int $attr The attribute you want to get. * @return mixed False if the attribute is not found, otherwise return value of the attribute. * */ public function getAttr($attr) { return $this->statement_object->attr_get($attr); } /** * Wrapper for MySQLi_STMT::attr_set(). * * @param int $attr The attribute you want to set. * @param int $mode The value to assign to the attribute. * */ public function setAttr($attr, $mode) { $this->statement_object->attr_set($attr, $mode); } /** * Wrapper for MySQLi_STMT::data_seek(). * * @param int $offset * */ public function dataSeek($offset) { $this->statement_object->data_seek($offset); } /** * Wrapper for MySQLi_STMT->errno. * * @return int Error number for the last execution. * */ public function getErrorNo() { return $this->statement_object->errno; } /** * Wrapper for MySQLi_STMT->error. * * @return string Error message for last execution. * */ public function getErrorMessage() { return $this->statement_object->error; } /** * Wrapper for MySQLi_STMT->field_count. * * @return int Number of fields in the given statement. * */ public function getFieldCount() { return $this->statement_object->field_count; }
{ "domain": "codereview.stackexchange", "id": 367, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, mysql, mysqli", "url": null }
javascript As a side note, I've seen people use switch (true) before when they want to use some logic in their cases instead of just hard-coded values. Kind of a nice alternative to an if-else construct, and it would probably perform better than looping over an object using a for-in loop. You could still keep the sizes in an object: var sizes = { 100 : 'url_t', 240 : 'url_s', 320 : 'url_n', 500 : 'url_m', 640 : 'url_z', 800 : 'url_c', 1024 : 'url_l' }; function getSize(number) { switch (true) { case number < 100: return sizes[100]; case number < 240: return sizes[240]; case number < 320: return sizes[320]; case number < 500: return sizes[500]; case number < 640: return sizes[640]; case number < 800: return sizes[800]; default: return sizes[1024]; } } Edit: A problem arises if you need to change the sizes for your application. That's why I'm leaning towards the first function, which is just a switch statement and hard-coded values.
{ "domain": "codereview.stackexchange", "id": 8643, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript", "url": null }
javascript, angular.js Title: Angular Checkbox Filtering I'm very new to Angular, and trying to get my head around whether or not I'm doing it the Right Way (tm). I want to filter a set of results by their properties, where if none of the filters are checked, all the results are displayed, but if one or more filters are checked, all results which match the checked properties are displayed. I've set up a simple demo with the colours of fruit. The JSFiddle is available at http://jsfiddle.net/mattdwen/u5eBH/2/ The HTML has a series of checkboxes and a repeating list for the set of results: <div ng-app="fruit"> <div ng-controller="FruitCtrl"> <input type="checkbox" ng-click="includeColour('Red')"/> Red</br/> <input type="checkbox" ng-click="includeColour('Orange')"/> Orange</br/> <input type="checkbox" ng-click="includeColour('Yellow')"/> Yellow</br/> <ul> <li ng-repeat="f in fruit | filter:colourFilter"> {{f.name}} </li> </ul> Filter dump: {{colourIncludes}} </div> </div> And the JS adds the checked properties to a scope array, and the filter checks if the fruit colour is in that array: 'use strict' angular.module('fruit', []); function FruitCtrl($scope) { $scope.fruit = [ {'name': 'Apple', 'colour': 'Red'}, {'name': 'Orange', 'colour': 'Orange'}, {'name': 'Banana', 'colour': 'Yellow'}]; $scope.colourIncludes = [];
{ "domain": "codereview.stackexchange", "id": 4448, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, angular.js", "url": null }
• Are you sure there was not a "$2$" in front of the first surface integral? – JG123 Aug 17 '19 at 15:25 • yes, I've attached an image of his work – financial_physician Aug 17 '19 at 16:07 • but if you think it makes more sense with a 2 and could explain how you got where you're at, might help us make some progress – financial_physician Aug 17 '19 at 16:23 Well, here is my rationale: the ellipsoid in question can be represented by $z=\pm\sqrt{64-4x^2-9y^2}$. The "top half" is given by $z=\sqrt{64-4x^2-9y^2}$, with $4x^2+9y^2\le64$. In other words, we have that $-\frac{\sqrt{64-4x^2}}{3}\le y\le\frac{\sqrt{64-4x^2}}{3}$ and $-4\le x\le4$. Hence, the surface area of this "top half" will be the first surface integral that your professor wrote. Now consider the portion of the ellipsoid above the plane $z=1$ (which will have the same surface area as the portion below the plane $z=-1$). We are now computing a surface integral with the same surface $z=\sqrt{64-4x^2-9y^2}$, but we have that $4x^2+9y^2\le63$. (This can be obtained by plugging $z=1$ into the expression for the ellipsoid in question.) In other words, we have that $-\frac{\sqrt{63-4x^2}}{3}\le y\le\frac{\sqrt{63-4x^2}}{3}$
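The integrand of those surface integrals follows mechanically from z = sqrt(64 - 4x^2 - 9y^2); a small numeric check of the simplification, at sample points chosen inside the domain:

```python
# Since z_x = -4x/z and z_y = -9y/z, the surface-area integrand
# sqrt(1 + z_x^2 + z_y^2) simplifies to
# sqrt((64 + 12x^2 + 72y^2) / (64 - 4x^2 - 9y^2)).
import math

for x, y in [(0.0, 0.0), (1.0, 1.0), (2.0, -1.5), (-3.0, 0.5)]:
    z = math.sqrt(64 - 4*x**2 - 9*y**2)
    zx, zy = -4*x / z, -9*y / z
    lhs = math.sqrt(1 + zx**2 + zy**2)
    rhs = math.sqrt((64 + 12*x**2 + 72*y**2) / (64 - 4*x**2 - 9*y**2))
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```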
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9833429634078179, "lm_q1q2_score": 0.8338431340646427, "lm_q2_score": 0.847967764140929, "openwebmath_perplexity": 227.60518806836214, "openwebmath_score": 0.9211289286613464, "tags": null, "url": "https://math.stackexchange.com/questions/3325615/surface-area-of-an-ellipsoid-above-a-given-plane" }
black-holes, astrophysics, galaxies Title: What happens when the black hole at a galactic core eats the galaxy? I'm making several assumptions, not sure if any are correct: (1) there is a black hole at the center of a galaxy; (2) the black hole is eating the galaxy
{ "domain": "physics.stackexchange", "id": 8974, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "black-holes, astrophysics, galaxies", "url": null }
This is $m = (1+3x+x^2)^2$. So when $x$ is an integer, this shows that $m$ is a perfect square, without induction. • How did you factorise the quartic? – Ben Millwood Jun 7 '12 at 9:47 • @benmachine: I basically used Phira's process (below) and guessed the constant. – daniel Jun 7 '12 at 9:51 • @benmachine you can also do $(x^2+ax+b)^2 = 1+6x+11x^2+6x^3+x^4$ and that is easy to solve for $a$ and $b$. – picakhu Jun 7 '12 at 13:26 I think there are two issues here. One is constructing the quartic, which just depends on you doing the algebra correctly. The second is proceeding to factorise the quartic. It would be easier to factorise it if you know what the factorisation is going to be. To discover this, I tried a few examples. For $p=7$, the quartic gives $5041=71^2$. For $p=14$, the quartic gives $57121=239^2$. I noticed that $71=72-1=8\times9-1$ and $239=15\times16-1$. This suggested that the quartic was $((p+1)(p+2)-1)^2$. Once you know the answer, it is easy to find it! Take $p^2$ out as a common factor after multiplying. Then put $p + 1/p = y$ and solve. I have to add what I think is a 'dumb' way to do it by hand (with paper) as opposed to Alex B.'s succinct cleverness: First, multiply out the product to get $p^4 + 6p^3 + 11p^2 + 6p + 1$. Since this is a square, it must be the square of a quadratic $p^2 + x p + y$. Squaring the quadratic, ignoring a lot of the cruft, and looking at just the second and last coefficients gives $$6 = x + x$$ and $$1 = y^2$$
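The factorisation claimed above is easy to machine-check; a short script verifying n(n+1)(n+2)(n+3) + 1 = ((n+1)(n+2) - 1)^2 over a range of integers, including the worked examples p = 7 and p = 14:

```python
# Verify that the product of four consecutive integers plus one is the
# perfect square ((n+1)(n+2) - 1)^2 = (n^2 + 3n + 1)^2.
for n in range(-50, 51):
    m = n * (n + 1) * (n + 2) * (n + 3) + 1
    root = (n + 1) * (n + 2) - 1
    assert m == root * root
    assert root == n * n + 3 * n + 1   # same quadratic as in the answer

# The worked examples from the answer above.
assert 7 * 8 * 9 * 10 + 1 == 71**2 == 5041
assert 14 * 15 * 16 * 17 + 1 == 239**2 == 57121
```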
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.981735723251272, "lm_q1q2_score": 0.8324802443367109, "lm_q2_score": 0.8479677622198946, "openwebmath_perplexity": 530.6470112543523, "openwebmath_score": 0.994339644908905, "tags": null, "url": "https://math.stackexchange.com/questions/155040/prove-that-the-product-of-four-consecutive-positive-integers-plus-one-is-a-perfe/155050" }
If $B$ is a 2-blade (also called a bivector), then you should be able to imagine this directly: if $a$ lies entirely in $B$, then $a \cdot B$ is just the vector perpendicular to $a$ in $B$. If $a$ does not lie entirely in $B$, then it can be decomposed into a tangential part and a normal part. We throw away the normal part, and the previous logic applies for the tangential part. If $B$ is a 3-blade (a trivector), then in 3d space $a$ must lie in $B$ (for there is no 3d volume that a vector does not help span), and the product $a \cdot B$ is the "Hodge dual", or the plane perpendicular to $a$. In this light, the dot product of vectors may actually be the most non-intuitive part of this reasoning. When you take the dot product, there's only a scalar left--there's no vector or other higher dimensional object left to be orthogonal to $a$. Again, this is why I emphasize that $a \cdot B$ is the part of $B$ orthogonal to the projection of $a$ onto $B$. When $B$ is a vector, it's clear there is no other vector or anything else that can be orthogonal to the projection of $a$, for $B$ and that projection are parallel, so the result is necessarily just a scalar. When one calculates A.B, two measurements happen: measurement of how small the angle between them is, and how long A and B are. A.B basically means projection length of A on B, with this length then scaled by the absolute length of B. One way to think about the interpretation of the dot product is to think how would one maximise or minimise the dot product between two vectors. Let's assume we are trying to maximise the dot product between two vectors that we can modify:
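A small numeric illustration of the projection reading of the dot product described above; the sample vectors are arbitrary.

```python
# a·b equals the length of a's projection onto b, scaled by |b|.
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

a = (3.0, 4.0, 0.0)
b = (2.0, 0.0, 0.0)   # points along the x-axis

norm_b = math.sqrt(dot(b, b))
proj_len = a[0]       # geometric projection of a onto the x-axis has length 3
assert math.isclose(dot(a, b), proj_len * norm_b)

# Zero for orthogonal vectors, maximal (|a|·|b|) for parallel ones.
assert dot(a, (0.0, 0.0, 1.0)) == 0.0
assert math.isclose(dot(a, a), math.sqrt(dot(a, a)) ** 2)
```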
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9891815500714293, "lm_q1q2_score": 0.8484883890908539, "lm_q2_score": 0.8577681104440172, "openwebmath_perplexity": 166.09800919993327, "openwebmath_score": 0.9011787176132202, "tags": null, "url": "https://math.stackexchange.com/questions/414776/what-is-the-use-of-the-dot-product-of-two-vectors/415174" }
one fair die (6-sided): what is its entropy before the result is observed? Here \(p_i=\frac{1}{6}\) for \(i = 1, 2, \ldots, 6\). One way to define the quantity "entropy" is to do it in terms of the multiplicity. The log of a probability (a value < 1) is negative; the leading negative sign negates it. Intuitive answer: entropy is the number of bits needed on average to store an outcome. Rolling a fair die, you get a uniform distribution, which is what we would expect. In most cases, at least where you're interested in playing a fair game, you want to be pretty sure that there's a random distribution of the dice-roll results. (A fair die would be expected to have average 7/2 = 3.5.) The result of a fair die (6 possible values) would require on average \(\log_2 6\) bits. Note that there is a correlation between uncertainty and information content, so we can assume that the result of rolling a normal 6-sided die will contain more information than that of a 4-sided die, and less information than that of an 8-sided die. Entropy: a measure of uncertainty. The entropy of the outcome will be: \(H = \sum_{i=1}^{6} \frac{1}{6}\log_2 6 = \log_2 6\). Consequently, interpreting the output of the dice roll as a random output, a fixed-length binary encoding of one roll needs \(\lceil\log_2 6\rceil = 3\) bits. Another example: a fair die has a probability of 1/6 for each of its sides, thus giving an entropy of 2.5849 bits. This post is all about dice and maximum entropy. You say the fair die has an entropy of \(\log_2 6\)? Let us look at an even more simple object: the fair coin. Will the entropy of a non-uniform die be higher or lower than the 2.5849 bits of a fair one? However, the entropy
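The entropy computation above in a few lines of Python:

```python
# Shannon entropy of one roll of a fair six-sided die:
# H = -sum_i p_i log2(p_i) = log2(6) ≈ 2.585 bits.
import math

p = [1 / 6] * 6
H = -sum(pi * math.log2(pi) for pi in p)

assert math.isclose(H, math.log2(6))
# A fixed-length binary code still needs ceil(log2 6) = 3 bits per roll.
assert math.ceil(math.log2(6)) == 3
print(round(H, 4))
```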
{ "domain": "steinpils.de", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9799765581257486, "lm_q1q2_score": 0.8079533630296167, "lm_q2_score": 0.8244619285331332, "openwebmath_perplexity": 430.41855107323534, "openwebmath_score": 0.7215729355812073, "tags": null, "url": "https://steinpils.de/entropy-of-a-fair-dice.html" }
c++, beginner, rock-paper-scissors This will make the code easier to read. Also this matches up to zero/one/two so it makes accessing arrays easier (see below). Codereview This looks like a C interface. Did you not want to write a C++ application? char getUserInput(); char getCompInput(); void showValue(char x); void getWinner(char x, char y); int getScore(char x, char y); When reading data from an untrusted source (always - but most specifically humans), you need to validate that the read worked. int matches; // This read assumes that the next character on the // input stream is a number. If it is not then the // read will fail (I am not sure if `matches` is set to zero // on a failure) but I would not make that assumption. // defensive coding and all. std::cin >> matches; // But even if a read failure set matches to zero. // A read failure will put the stream into an error state. // Any subsequent read operation will not work until you // get the stream out of the error state. while (matches < 1) The correct way to do this is: int matches = 0; std::cout << "How many games do you want to play?: \n";
{ "domain": "codereview.stackexchange", "id": 45307, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, beginner, rock-paper-scissors", "url": null }
quantum-mechanics, operators, hilbert-space, wavefunction, observables it is a smooth function meaning it is infinitely differentiable now I think that these conditions are the things that define the wave function space. Why are you trying to do this? Also, we know from general principles that the actual electron density functions that properly solve the quantum physics problems will have a cusp at the nuclei that contains the information of the charge of the nuclei; see Kato's theorem. This means that the wavefunction will be weird at the nuclei. I really do not think that trying to postulate quantum theory from an approach like yours will be fruitful, given the much better postulates of quantum theory that we currently have. And I know that a physical quantum state is actually a complex plane passing through the origin in the above defined state space of wave function which also happens to be a complex vector space. I am joined by others in being unable to understand what you might be trying to mean here. It definitely does not even sound correct. consider any arbitrary function $f(x,p)$ where $x$ is position and $p$ is momentum. and $f$ is such that it emits a same physical dimension real value for all $(x,p)$ set. so we can say that a $f(x,p)$ is a physical observable with its own physical dimension. What do you mean by "physical dimension"? What is a "physical dimension real value"? so for a physical state can we get all the valid PDFs for the suspected observables? like for physical observable. Your problem statement is too unclear for us to give a definitive statement either way. We do have some theorems to guarantee validity for certain classes, and also counterexamples for cases where they fail. You have to be much more precise to be able to talk about this topic. or some random function with some random dimension What do you mean by "dimension"?
For example, if we take the position operator, whose domain is the above-defined state space, then we can write $$\hat x\varphi(x)=x\varphi(x)$$ Does the resulting state also satisfy the above conditions?
{ "domain": "physics.stackexchange", "id": 96254, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, operators, hilbert-space, wavefunction, observables", "url": null }
organic-chemistry, synthesis My answers: (a) I mentioned $1~\mathrm{eq}$ of sodium hydroxide, as sodium hydroxide reacts with glycine in a 1:1 ratio. (b) I mentioned that sodium carbonate cannot be used in place of sodium hydroxide, as sodium carbonate is not basic enough to catalyse the reaction. (c) Sodium chloride. (d) I mentioned that during the process, when the combined organic layers were acidified to $\mathrm{pH}=1$, it could destroy the product by hydrolysing it, hence giving a yield of $0\%$. I'm unsure whether my answers are correct though; could anyone explain? Also, are there any resources on the web that contain practice questions of a similar nature to this? I agree with Ron's answer except for d. Using an extra equivalent of NaOH is the right procedure, since you need to "mop up" the HCl that is produced when $\ce{CbzCl}$ reacts with the amino group. What is not correct is this: when you wash the reaction mixture with ether, you get rid of any impurities and your product should stay in the aqueous phase. So you have to acidify the aqueous layer and not the ether one. pH 1 should be fine for the Cbz unless you leave it there and go away for the weekend.
{ "domain": "chemistry.stackexchange", "id": 2196, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "organic-chemistry, synthesis", "url": null }
• Did you manage the $dx$? (I think you should have another $\sec^2(t)$.) – Ian May 15 '15 at 14:23 • @Ian Thanks. corrected. – Leg May 15 '15 at 14:28
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9643214480969029, "lm_q1q2_score": 0.8378282835438394, "lm_q2_score": 0.8688267643505193, "openwebmath_perplexity": 314.40359479979827, "openwebmath_score": 0.9696581959724426, "tags": null, "url": "https://math.stackexchange.com/questions/1283195/can-we-determinine-the-convergence-of-int-0-infty-fracx2n-1x2-1" }
classical-mechanics Title: Two interacting particles on sphere drift to sphere poles Suppose we have two particles which can move on sphere of radius $r$, and they attract to each other so that their potential energy is $g(d)=ad$ where $d$ is distance between them. I've found Lagrangian, it looks like this (in spherical coordinates): $$L=\frac{r^2}2\left(m_1\left(\dot\theta_1^2+\dot\varphi_1^2\sin^2\theta_1\right)+m_2\left(\dot\theta_2^2+\dot\varphi_2^2\sin^2\theta_2\right)\right)-g\left(l_{arc}\left(\theta_1,\varphi_1,\theta_2,\varphi_2\right)\right),$$ where $l_{arc}$ is arc length between two points on sphere: $$l_{arc}=2r\arcsin\left(\frac1{\sqrt2}\sqrt{1-\cos\theta_1\cos\theta_2-\cos\left(\varphi_1-\varphi_2\right)\sin\theta_1\sin\theta_2}\right)$$ So, equations of motion for particle $i$ look like:
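The arc-length expression in the Lagrangian can be sanity-checked numerically against the standard great-circle distance r·arccos(cos γ), where γ is the angle between the two position vectors; the random sampling below is just for the check.

```python
# l_arc = 2 r arcsin( sqrt((1 - cos(gamma)) / 2) ) should equal r*gamma,
# where cos(gamma) = cos t1 cos t2 + cos(p1 - p2) sin t1 sin t2.
import math
import random

random.seed(1)
r = 2.5
for _ in range(100):
    t1, t2 = random.uniform(0.1, 3.0), random.uniform(0.1, 3.0)
    p1, p2 = random.uniform(0.0, 6.28), random.uniform(0.0, 6.28)

    c = math.cos(t1) * math.cos(t2) + math.cos(p1 - p2) * math.sin(t1) * math.sin(t2)
    s = math.sqrt(max(0.0, min(1.0, (1.0 - c) / 2.0)))  # clamp for fp safety
    l_arc = 2.0 * r * math.asin(s)
    assert abs(l_arc - r * math.acos(c)) < 1e-8
```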
{ "domain": "physics.stackexchange", "id": 9197, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "classical-mechanics", "url": null }
cosmology, spacetime, universe Title: Can a physical object escape the universe? Can a particle, like an electron or photon, leave the universe? If the photon, for instance, travels out toward the edges of the universe, assuming it is flat, will it encounter an invisible wall, or will it leave the universe? To the best of our knowledge, the universe does not have a boundary or an edge.
{ "domain": "physics.stackexchange", "id": 93325, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cosmology, spacetime, universe", "url": null }
information-theory, communication, superdense-coding Alice and Bob have agreed on the order in which ebits will be used up, thus, if Bob receives the message from Alice during the $n^{\rm th}$ minute since they start the procedure (say, Alice uses $1$ extra classical bit to ping Bob that she's starting so Bob can keep the clock), Bob knows that he should use the $n^{\rm th}$ ebit to receive the message. Once Bob has picked the right ebit, if the classical bit he received was $0$, he measures his share of the ebit in $Z$ basis, otherwise he measures it in $X$ basis -- and voila, he recovers the second bit of classical information, namely, $b$. Of course, it would take Alice $2$ tries, on average, to get the $b$ that she wants to send as the result of her measurement of her share of ebit. Thus, all in all, we have $2n$ ebits $+$ $n$ bits $=$ $2n$ bits. The protocol you describe is correct, but the resource estimation is wrong. Furthermore, something like superdense coding with purely classical bits is prohibited by the No-Signaling Principle. This protocol sends more than $n$ cbits Bob has communication channel $\mathcal{C}$, and he conditions his decisions on the information he can learn from $\mathcal{C}$, whether he received a message or not. The communication protocol between Alice and Bob is initialized with them sharing a sequence of Bell states $|\Phi_i\rangle = (|00\rangle + |11\rangle)/\sqrt{2}$ ($i$ just indicates which timestep corresponds to which Bell state). When communication begins, both parties set $i=0$. Then, Alice behaves as you described, and Bob behaves as follows: Define a variable $z\in\{0, 1, \emptyset\}$ While $b$ is unknown:   1. Check $\mathcal{C}$:    a. If no signal was received: $z \leftarrow\emptyset$    b. If a signal $a \in \{0, 1\}$ was received, $z \leftarrow a$
{ "domain": "quantumcomputing.stackexchange", "id": 3994, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "information-theory, communication, superdense-coding", "url": null }
java, beginner I would also change the parameters for Album.addSong() and removeSong() to take a Song instance instead of making Album responsible for instantiating Songs. And a tip. Use final when declaring fields that don't need to change, especially collections. This reduces the number of mutable things you have to keep track of. When you have a non-final List field, it is mutable in two different ways: you could change the contents of the list or replace the list entirely. Keep it simpler by restricting that to one way of modifying it.
{ "domain": "codereview.stackexchange", "id": 38649, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, beginner", "url": null }
photons, atomic-physics, spectroscopy Title: Is the emission spectrum of a muonic atom different? From my quick investigation, the spectrum is based on the Rydberg formula, and with a small change, would lead to $$ {1 \over \lambda_\mu} = {m_\mu \over m_e} \left( R \left( {1\over n_1^2} - {1\over n_2^2} \right)\right) $$ where $m_\mu$ is the mass of a muon. So, taking hydrogen as an example, we would observe similar bands, shifted into the x-ray/gamma range. Is this correct? Almost but not quite. Qualitatively the spectrum is the same with the $1/n^2$ spacing, but the scale of the spectrum is set by the reduced mass $\mu$, $$\mu = \frac{1}{\frac{1}{m_l}+ \frac{1}{m_p}}$$ where $m_p$ is the proton mass and $m_l$ is the lepton (muon or electron) mass. Since $m_p \approx 2000 m_e$, it is not a large error to take $\mu = m_e$ for an electronic hydrogen atom. But for the muon, $m_\mu \approx 200 m_e \approx 0.1 m_p$, so the error is quite large, around 10%.
{ "domain": "physics.stackexchange", "id": 16459, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "photons, atomic-physics, spectroscopy", "url": null }
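The reduced-mass point above is easy to check numerically; a small sketch (the mass ratios 206.77 and 1836.15, in units of the electron mass, are standard values I'm assuming, not taken from the post):

```python
# Reduced mass mu = 1/(1/m_l + 1/m_p), everything in units of the electron mass
m_e, m_mu, m_p = 1.0, 206.77, 1836.15  # assumed standard mass ratios

def reduced_mass(m_l, m_p):
    return 1.0 / (1.0 / m_l + 1.0 / m_p)

mu_electron = reduced_mass(m_e, m_p)  # ~0.9995 m_e: taking mu = m_e is a fine approximation
mu_muon = reduced_mass(m_mu, m_p)     # ~186 m_e, not 206.77 m_e: ~10% error if ignored

# Rydberg energies scale with mu, so transition wavelengths shrink by this factor
scale = mu_muon / mu_electron         # roughly 186: the lines shift toward x-rays
```

This makes the answer's "around 10%" remark concrete: $206.77/185.8 \approx 1.11$.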
modeling, time-domain-astronomy, gravitational-waves Title: How to derive gravitational-wave frequency vs time from strain vs time Suppose I have a time-series of the gravitational-wave strain amplitude as a (discrete, i.e., an array of numbers) function of time. The figure below is just illustrative. I am not using measured LIGO/Virgo data, as that would require windowing the data and many other steps; rather, I'm getting my strain vs time from a waveform model such as surrogate models. Question: how does one derive the gravitational-wave frequency as a function of time? I think that this involves the discrete Fourier transform, for example with python using the fast Fourier transform provided by the scipy library's fft function, but I'm unsure how to arrive at the frequency as a function of time in theory. Any help is greatly appreciated, and sources are very much welcome. As shown in the figure below, in essence I'm wondering how to start from the above time series and arrive at the bottom time series. Thanks! It turns out there are many ways to do this. A conceptually straightforward way is to differentiate the phase, $\Phi_{lm}$, of the gravitational wave. Expanding the strain of the gravitational wave in spherical harmonics, each mode being complex-valued, we have $$ h_{lm}(t) = h_+(t) - ih_\times(t) = A_{lm}(t) e^{i\Phi_{lm}(t)}$$ where $A_{lm}$ is the amplitude of the wave. The frequency is then $$ f_{lm}(t) = \frac{1}{2\pi}\frac{d}{dt}\Phi_{lm}(t).$$
{ "domain": "astronomy.stackexchange", "id": 6047, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "modeling, time-domain-astronomy, gravitational-waves", "url": null }
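The phase-derivative recipe can be illustrated with a toy, pure-Python sketch (a made-up constant-frequency signal, not real waveform data): recover the phase with atan2, unwrap the $2\pi$ jumps, and differentiate numerically.

```python
import math

f0 = 5.0   # frequency of the toy complex "strain", Hz (assumed)
dt = 1e-3  # sample spacing, s
n = 2000
t = [i * dt for i in range(n)]

# toy h(t) = A e^{i Phi(t)} with A = 1 and Phi(t) = 2*pi*f0*t
h = [(math.cos(2 * math.pi * f0 * ti), math.sin(2 * math.pi * f0 * ti)) for ti in t]

# wrapped phase from the complex samples
phase = [math.atan2(im, re) for re, im in h]

# unwrap: remove the 2*pi jumps so Phi(t) is continuous
unwrapped = [phase[0]]
for prev, cur in zip(phase, phase[1:]):
    d = cur - prev
    while d > math.pi:
        d -= 2 * math.pi
    while d < -math.pi:
        d += 2 * math.pi
    unwrapped.append(unwrapped[-1] + d)

# f(t) = (1/2pi) dPhi/dt via central differences on the interior points
freq = [(unwrapped[i + 1] - unwrapped[i - 1]) / (2 * dt) / (2 * math.pi)
        for i in range(1, n - 1)]
```

In practice one would apply numpy's `unwrap` and `gradient` to the complex mode $h_{lm}$, but the steps are the same.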
to solve the entered equation for real and complex roots. What are the coefficients if you have the following expression: -3 + 1/2? In simple terms, you get the quadratic formula by solving the quadratic equation via completing the square. Example 1 … Quadratic Equation Solver. If you can set your calculator's mode to a + bi, you should be able to calculate imaginary solutions as well. Add the Quadratic Formula Calculator to your website so that visitors can use the calculator directly. The standard form of a quadratic equation is as follows: with a ≠ 0, it has a solution of the form: There are several quadratic formula steps to follow to solve a quadratic equation successfully: First of all, examine the given equation of the form ax^2 + bx + c, and then determine the coefficients a, b, and c. The ‘a’ is the coefficient that multiplies the quadratic term x^2. You just need to enter your values in the 3 inputs of the quadratic equation calculator. For example, for the quadratic equation below, you would enter 1, 5 and 6. About quadratic equations: quadratic equations have an x^2 term, and can be rewritten to have the form a x^2 + b x + c = 0. The ease with which my son uses it to learn to tackle complex equations is truly marvelous. First, you have to find the vertex: plug the value of x into the equation 2x^2 – 4x – 1. Quadratic Formula Calculator Instructions: this quadratic formula calculator will solve a quadratic equation for you, showing all the steps. The coefficients are all real numbers that do not depend on x. Our quadratic calculator can also help you if you can put the equation in this form: This quadratic formula calculator is a tool that helps to solve a quadratic equation by using the quadratic formula or the complete-the-square method. Also, it is important to note that the numerals i.e. 
Our quadratic equation solver will solve a second-order polynomial equation such as $$ax^2 + bx + c = 0$$ for $$x$$, where $$a ≠ 0$$, using the quadratic formula.. Quadratic
{ "domain": "p-palaceimmobilier.com", "id": null, "lm_label": "1. Yes\n2. Yes", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9766692366242306, "lm_q1q2_score": 0.8117518481198199, "lm_q2_score": 0.8311430499496096, "openwebmath_perplexity": 548.1103188028409, "openwebmath_score": 0.6156752109527588, "tags": null, "url": "http://blog.p-palaceimmobilier.com/qnuovr/1192a1-quadratic-formula-calculator" }
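The steps the page describes reduce to a few lines; a sketch (my own, not the site's implementation) that also covers the a + bi case by using cmath:

```python
import cmath

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula.

    cmath.sqrt keeps complex (a + bi) roots working when the
    discriminant is negative.
    """
    if a == 0:
        raise ValueError("not quadratic: coefficient a must be nonzero")
    d = cmath.sqrt(b * b - 4 * a * c)  # square root of the discriminant
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

r1, r2 = solve_quadratic(1, 5, 6)  # the page's example inputs 1, 5 and 6
c1, c2 = solve_quadratic(1, 0, 1)  # complex pair for x^2 + 1 = 0
```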
python, algorithm, python-3.x, unit-testing, checksum INVERSE_TABLE = (0, 4, 3, 2, 1, 5, 6, 7, 8, 9) PERMUTATION_TABLE = ( (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), (1, 5, 7, 6, 2, 8, 3, 0, 9, 4), (5, 8, 0, 3, 7, 9, 6, 1, 4, 2), (8, 9, 1, 6, 0, 4, 3, 5, 2, 7), (9, 4, 5, 3, 1, 2, 6, 8, 7, 0), (4, 2, 8, 6, 5, 7, 3, 9, 0, 1), (2, 7, 9, 3, 8, 0, 6, 4, 1, 5), (7, 0, 4, 6, 9, 1, 3, 2, 5, 8) ) @classmethod def calculate(cls, input_: str) -> str: """Calculate the check digit using Verhoeff's algorithm""" check_digit = 0 for i, digit in enumerate(reversed(input_), 1): col_idx = cls.PERMUTATION_TABLE[i % 8][int(digit)] check_digit = cls.MULTIPLICATION_TABLE[check_digit][col_idx] return str(cls.INVERSE_TABLE[check_digit]) @classmethod def validate(cls, input_: str) -> bool: """Validate the check digit using Verhoeff's algorithm""" check_digit = 0 for i, digit in enumerate(reversed(input_)): col_idx = cls.PERMUTATION_TABLE[i % 8][int(digit)] check_digit = cls.MULTIPLICATION_TABLE[check_digit][col_idx] return cls.INVERSE_TABLE[check_digit] == 0
{ "domain": "codereview.stackexchange", "id": 34944, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, algorithm, python-3.x, unit-testing, checksum", "url": null }
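The excerpt above references a `MULTIPLICATION_TABLE` that is not shown. For completeness, here is a self-contained sketch: the multiplication table is the standard Cayley table of the dihedral group $D_5$ that the Verhoeff scheme is built on (that table is my addition; the permutation and inverse tables match the code under review):

```python
# Verhoeff check digit, self-contained. D5 is the Cayley table of the
# dihedral group D5; PERM and INV match the tables in the review above.
D5 = (
    (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), (1, 2, 3, 4, 0, 6, 7, 8, 9, 5),
    (2, 3, 4, 0, 1, 7, 8, 9, 5, 6), (3, 4, 0, 1, 2, 8, 9, 5, 6, 7),
    (4, 0, 1, 2, 3, 9, 5, 6, 7, 8), (5, 9, 8, 7, 6, 0, 4, 3, 2, 1),
    (6, 5, 9, 8, 7, 1, 0, 4, 3, 2), (7, 6, 5, 9, 8, 2, 1, 0, 4, 3),
    (8, 7, 6, 5, 9, 3, 2, 1, 0, 4), (9, 8, 7, 6, 5, 4, 3, 2, 1, 0),
)
INV = (0, 4, 3, 2, 1, 5, 6, 7, 8, 9)
PERM = (
    (0, 1, 2, 3, 4, 5, 6, 7, 8, 9), (1, 5, 7, 6, 2, 8, 3, 0, 9, 4),
    (5, 8, 0, 3, 7, 9, 6, 1, 4, 2), (8, 9, 1, 6, 0, 4, 3, 5, 2, 7),
    (9, 4, 5, 3, 1, 2, 6, 8, 7, 0), (4, 2, 8, 6, 5, 7, 3, 9, 0, 1),
    (2, 7, 9, 3, 8, 0, 6, 4, 1, 5), (7, 0, 4, 6, 9, 1, 3, 2, 5, 8),
)

def calculate(number: str) -> str:
    """Check digit to append; digits are processed right-to-left from i = 1."""
    c = 0
    for i, digit in enumerate(reversed(number), 1):
        c = D5[c][PERM[i % 8][int(digit)]]
    return str(INV[c])

def validate(number: str) -> bool:
    """True iff the trailing check digit is consistent (i starts at 0)."""
    c = 0
    for i, digit in enumerate(reversed(number)):
        c = D5[c][PERM[i % 8][int(digit)]]
    return c == 0  # INV[c] == 0 exactly when c == 0
```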
beginner, object-oriented, c, parsing, polymorphism Node malloc_operator_node(Operator *op, size_t num_children) { if (num_children > MAX_ARITY) { // Max. arity exceeded return NULL; } OperatorNode res = malloc(sizeof(struct OperatorNode_) + num_children * sizeof(Node)); if (res == NULL) return NULL; for (size_t i = 0; i < num_children; i++) { res->children[i] = NULL; } res->type = NTYPE_OPERATOR; res->op = op; res->num_children = num_children; return (Node)res; } void free_tree(Node tree) { if (tree == NULL) return; if (get_type(tree) == NTYPE_OPERATOR) { for (size_t i = 0; i < get_num_children(tree); i++) { free_tree(get_child(tree, i)); } } free(tree); } NodeType get_type(Node node) { return *node; } Operator *get_op(Node node) { return ((OperatorNode)node)->op; } size_t get_num_children(Node node) { return ((OperatorNode)node)->num_children; } Node get_child(Node node, size_t index) { return ((OperatorNode)node)->children[index]; } Node *get_child_addr(Node node, size_t index) { return &((OperatorNode)node)->children[index]; } void set_child(Node node, size_t index, Node child) { ((OperatorNode)node)->children[index] = child; } char *get_var_name(Node node) { return ((VariableNode)node)->var_name; } double get_const_value(Node node) { return ((ConstantNode)node)->const_value; }
{ "domain": "codereview.stackexchange", "id": 36170, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, object-oriented, c, parsing, polymorphism", "url": null }
deep-learning, tensorflow, keras 0.820463 0.176808 0 -0.763975 -0.342827 1 0.763975 0.342827 0 -0.679563 -0.491918 1 0.679563 0.491918 0 -0.57112 -0.618723 1 0.57112 0.618723 0 -0.443382 -0.71888 1 0.443382 0.71888 0 -0.301723 -0.78915 1 0.301723 0.78915 0 -0.151937 -0.82754 1 0.151937 0.82754 0 9.23077e-06 -0.833333 1 -9.23077e-06 0.833333 0 0.148202 -0.807103 1 -0.148202 0.807103 0 0.287022 -0.750648 1 -0.287022 0.750648 0 0.411343 -0.666902 1 -0.411343 0.666902 0 0.516738 -0.559785 1 -0.516738 0.559785 0 0.599623 -0.43403 1 -0.599623 0.43403 0 0.65738 -0.294975 1 -0.65738 0.294975 0 0.688438 -0.14834 1 -0.688438 0.14834 0 0.692308 1.16667e-05 1 -0.692308 -1.16667e-05 0 0.669572 0.144297 1 -0.669572 -0.144297 0 0.621838 0.27905 1 -0.621838 -0.27905 0 0.551642 0.399325 1 -0.551642 -0.399325 0 0.462331 0.500875 1
{ "domain": "datascience.stackexchange", "id": 1991, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "deep-learning, tensorflow, keras", "url": null }
python, clustering Well, I'm disappointed to see heapq.nsmallest() performed up to 40% worse than sorted on CPython, but I'm happy to see PyPy validates my assertion that you don't need to sort the entire list. Continuing with that thought, bisect.insort() may be used to maintain a list of the k-nearest neighbours so far:

neighbours = [(float('inf'), None)] * k

for pnt in points:
    dist = distance(pnt)
    if dist < neighbours[-1][0]:
        neighbours.pop()
        bisect.insort(neighbours, (dist, pnt))

counter = Counter(pnt.classif for dist, pnt in neighbours)

This gave me a 4% speedup over sorted()[:k] with your gist sample set. Significant, but not impressive. Still, it was enough encouragement to press on and look for other inefficiencies. How about the distance() code? It gets called a lot; can we speed it up? Sure!

def predict(target: Coordinates, points: Sequence[KNNPoint], k: int,
            *, sum=sum, zip=zip) -> str:
    def distance(p: KNNPoint) -> float:
        return sum((a - b) ** 2 for a, b in zip(target, p.coords))
    # ...
{ "domain": "codereview.stackexchange", "id": 31222, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, clustering", "url": null }
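The `bisect.insort` idea above can be made self-contained; a sketch with plain tuples (note I key on `(distance, index)` rather than `(distance, point)`, so that distance ties never fall through to comparing the point objects themselves, which would raise a TypeError):

```python
import bisect
from collections import Counter

def k_nearest(target, points, k):
    """Keep the k smallest (squared_distance, index) pairs via bisect.insort."""
    best = [(float('inf'), -1)] * k
    for idx, (coords, _label) in enumerate(points):
        dist = sum((a - b) ** 2 for a, b in zip(target, coords))
        if dist < best[-1][0]:
            best.pop()                    # drop the current worst
            bisect.insort(best, (dist, idx))
    return [idx for _d, idx in best if idx != -1]

# toy data: ((x, y), label)
points = [((0, 0), 'a'), ((1, 0), 'a'), ((5, 5), 'b'), ((0.1, 0.1), 'a')]
nearest = k_nearest((0, 0), points, 2)
label = Counter(points[i][1] for i in nearest).most_common(1)[0][0]
```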
gravity, particle-detectors Title: CERN projects on gravity Recently I was reading about CERN's upgrade to work on gravitational theories. But if most of the work has been done by the General Theory of Relativity, what other theories are there that need to be tested? And gravity is usually discussed in terms of heavenly bodies, so how will they simulate it using atomic-level particles, when both follow entirely different postulates? There exists an experiment at CERN, an international collaboration called ALPHA, which at the moment aims to check for differences between particles and antiparticles. In a news bulletin the intent is stated to extend the experiment: Though ALPHA-2 has only just arrived, discussions have already begun on a possible new experiment for the collaboration: ALPHA-3, which would investigate the properties of gravity. In order to make space for this possible expansion, a new platform was created over the experimental area for the ALPHA-2 electronics There is no proposal pending for studying gravitational differences between hydrogen and antihydrogen. From their web page: Eventually, we will use this technique to compare the structure of antihydrogen and hydrogen atoms, to search for differences between matter and antimatter, but in this first experiment, we do not yet have enough precision to test these fundamental symmetries. This is important, as the Universe has shown a preference for matter over antimatter as it has evolved, but so far, no measurements can explain why this came about. If matter and antimatter were truly identical, the Universe as we know it could not have come about. The next step at ALPHA is to construct an apparatus that will allow us to make these more precise measurements, using both microwave radiation and laser light.
{ "domain": "physics.stackexchange", "id": 5736, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gravity, particle-detectors", "url": null }
machine-learning, python, scikit-learn, feature-extraction input_count1 = layers.Input(shape=(10,), name='in1') input_count2 = layers.Input(shape=(10,), name='in2') input_count3 = layers.Input(shape=(10,), name='in3') # input size is reflection of the vocab size embed1 = layers.Embedding(5, 8)(input_count1) embed2 = layers.Embedding(3, 8)(input_count2) embed3 = layers.Embedding(3, 8)(input_count3) combine = layers.Concatenate()([embed1, embed2, embed3]) model = Model(inputs=[input_count1, input_count2, input_count3], outputs=combine) output = model.predict({ 'in1': keywords1, 'in2': keywords2, 'in3': keywords3, })
{ "domain": "datascience.stackexchange", "id": 7312, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, python, scikit-learn, feature-extraction", "url": null }
c, sorting, insertion-sort Is my implementation of insertion sort correct? Also, I would appreciate it if you could compare my implementation to the one made by the source I'm learning from: any efficiency differences, or anything else you think would be worth mentioning. Your implementation is not quite correct. It is subtle, but what you have is a type of bubble sort, in that you do not insert the new value; rather, you 'slide' the value into place. An insertion sort can be thought of in terms of a 'hole'. You shift all the 'bigger' values to the right by one space, and create a hole at the point where the value should be inserted. Then you insert the value into the hole. There should not be a concept of a 'swap' in an insertion sort. You start at the first unsorted value, and then, while the previous values are larger, you move them up one, until you have the space at the right spot. Then you put the value there. The example code makes this 'obvious' (but not really), in that it always compares against value and not arr[j+1]. It also has only a single 'assignment' to the array for each time in the loop. Your swap routine does two assignments on each loop. So, no, your implementation of an insertion sort is not quite correct. It is a correct sort, but not a 'textbook' insertion sort, since it slides, rather than inserts, the value.
{ "domain": "codereview.stackexchange", "id": 9444, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, sorting, insertion-sort", "url": null }
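The 'hole' description above translates directly to a few lines; a textbook sketch in Python (my own illustration, not the source's C code):

```python
def insertion_sort(arr):
    """Textbook insertion sort: shift larger values right to open a 'hole',
    then drop the saved value in. One assignment per shift, no swaps."""
    for i in range(1, len(arr)):
        value = arr[i]           # the value to insert
        j = i - 1
        while j >= 0 and arr[j] > value:
            arr[j + 1] = arr[j]  # shift right; the hole moves left
            j -= 1
        arr[j + 1] = value       # drop the value into the hole
    return arr
```

Note the loop compares against `value` throughout, never against `arr[j + 1]` — exactly the property the review points out in the example code.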
quantum-field-theory, hilbert-space, fermions, complex-numbers, grassmann-numbers Title: Grassmann numbers in the dual space I'm reading the section on Grassmann numbers in QFT for the Gifted Amateur and I'm confused by something said therein: First, they define a coherent state for fermions $\rvert \eta \rangle$ as \begin{align} \rvert \eta \rangle &= e^{-\eta \hat{c}^\dagger} \rvert 0 \rangle \\ &= \rvert 0 \rangle - \eta \rvert 1 \rangle \tag{28.12} \end{align} where $c^{\dagger}$ is the fermion creation operator and $\eta$ is a Grassmann number. Also, $$\hat{c}\rvert \eta \rangle = \eta \rvert \eta \rangle. $$ Now here's the part that confuses me: We can also define a state $\langle\bar{\eta} \lvert \hat{c}^{\dagger} = \langle \bar{\eta} \lvert \bar{\eta}$ where $$ \langle \bar{\eta} \lvert = \langle 0 \lvert - \langle 1 \lvert \bar{\eta} = \langle 0 \lvert + \bar{\eta} \langle 1 \lvert. \tag{28.15} $$ Note that $\bar{\eta}$ is not the complex conjugate of $\eta$ and $\langle \bar{\eta} \lvert $ is not the adjoint of $\rvert \eta \rangle$. With these definitions it follows that the value of an inner product is $$ \langle \bar{\zeta} \lvert \eta \rangle = e^{\bar{\zeta}\eta}. \tag{28.16} $$
{ "domain": "physics.stackexchange", "id": 26788, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, hilbert-space, fermions, complex-numbers, grassmann-numbers", "url": null }
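Equation (28.16) can be checked directly from (28.12) and (28.15); a sketch, assuming (as is conventional in this construction) that a Grassmann number anticommutes with a single-fermion bra or ket, so $\langle 1|\eta = -\eta\langle 1|$:

$$\langle\bar\zeta|\eta\rangle = \left(\langle 0| + \bar\zeta\langle 1|\right)\left(|0\rangle - \eta|1\rangle\right) = 1 - \bar\zeta\,\langle 1|\eta|1\rangle = 1 + \bar\zeta\eta = e^{\bar\zeta\eta},$$

where the cross terms vanish because $\langle 0|1\rangle = \langle 1|0\rangle = 0$, and the exponential series truncates after the linear term since $(\bar\zeta\eta)^2 = 0$.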
homework-and-exercises, kinematics Title: Calculating how long does a hare remain stationary in a race I'm solving book exercises - there's a classic problem involving a hare and a tortoise. This is what I did, but apparently the final answer is wrong. Basically it's a 1000m race, both animals have a constant speed (hare at 8m/s and tortoise at 0.2m/s). The hare runs 800m and then stops to tease the tortoise. Then, at some point in the future, the hare resumes and both animals finish at the same time. (a) How far is the tortoise from the finish line when the hare resumes the race? Well, since both animals finish at the same time, I just need to find out how long it takes the hare to finish the remaining 200m and then check how much distance could the tortoise cover in that time. So $$\frac{200\text{m}}{8\text{m/s}} = 25\text{s}$$ Meaning that the tortoise could cover barely $$25\text{s}\cdot0.2\text{m/s} = 5\text{m}$$ This appears to be correct according to the book. (b) For how long in time was the hare stationary?
{ "domain": "physics.stackexchange", "id": 43350, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, kinematics", "url": null }
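For part (b), one consistent way to finish the arithmetic, using the same "both finish together" constraint as part (a) (a sketch of the reasoning, not necessarily the book's intended solution path):

```python
track = 1000.0                 # race length, m
v_hare, v_tortoise = 8.0, 0.2  # speeds, m/s

race_time = track / v_tortoise   # both finish together, so the tortoise sets it: 5000 s
hare_moving = track / v_hare     # total time the hare actually spends running: 125 s
hare_resting = race_time - hare_moving

# cross-check of part (a): during the hare's final 200 m (25 s),
# the tortoise covers 5 m
tortoise_tail = (200.0 / v_hare) * v_tortoise
```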
javascript, angular.js, dependency-injection, controller, data-visualization The function starts with using Filter and ends with using Filter, and it doesn't use it anywhere in between: Filter.clearFilters(); // ... Filter.setReports(self.currentReports); Does this ordering matter? Will your program still work if you move the Filter.setReports call right after Filter.clearFilters? If not, then these calls will be good to group together. It will be even better if you move them to a dedicated function whose responsibility is to clear filters and set the reports, perhaps something like resetFilters(Filter). And move the call to this new function outside of updateBuildChart, where it really doesn't belong. What is this for? this.currentReports = $scope.$parent.reports; Is it just to be a shortcut for $scope.$parent.reports in the rest of the function, or will this have a side effect outside this function? If it's the first case, then it would be better to use var currentReports = instead. PagerFactory is only referenced in one place: $scope.pager = PagerFactory.Create({ items: this.currentReports }); Similar to the treatment of Filter earlier, it would be better to move this into a dedicated function, say updatePager, and call it outside of this function. What is this for? this.originalReports = angular.copy(this.currentReports); Since this.originalReports is not referenced again within the function, it would seem that this produces a side effect outside the function. Or else, it's pointless and should be deleted. The only references to LimitsManager: LimitsManager.calcFteLimits(); What is it doing there in the middle of the function? Does it need to be in the middle? Can it be at the start or at the end? Or does it have a side effect that makes a difference for the code that appears before or after it? If the code dealing with ChartBuilder is moved out to its own method, then this part can be simplified:
{ "domain": "codereview.stackexchange", "id": 14000, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, angular.js, dependency-injection, controller, data-visualization", "url": null }
c, security, linux, wrapper, child-process Title: Running shell script as root via external binary I'm not an experienced Linux user and I wanted an easy way to run shell scripts as root from a PHP script, I came up with this: #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <sys/stat.h> #include <unistd.h> #include <string.h> #include <strings.h> int main(int argc, char *argv[]) { if (access(argv[1], F_OK) != -1) { struct stat filestat; if (stat(argv[1], &filestat) == 0) { if ((filestat.st_uid == 0) && (filestat.st_gid == 0) && (filestat.st_mode & S_IXUSR) && (!(filestat.st_mode & S_IWOTH))) { char* match = strrchr(argv[1], '.'); if ((match != NULL) && (strcasecmp(match, ".sh") == 0)) { if (setuid(0) != -1) { execl("/usr/bin/sudo", "/usr/bin/sudo", argv[1], (char*) NULL); return 0; } } } } } return 1; } Any potential security issues with this? If so how can I improve it? Example Usage: Lets say I have a script located at: /some/path/script.sh With the following in it: #!/bin/bash echo $USER Now lets say I compile the above C code to a binary and place it at: /some/path/run-as and do: chown root:root /some/path/run-as chmod 6755 /some/path/run-as
{ "domain": "codereview.stackexchange", "id": 31704, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, security, linux, wrapper, child-process", "url": null }
ros, ros2, meta Title: How can I add 'foxy' as an acceptable version-tag on answers.ros.org The instruction for tags is: "You must at least tag the rosdistro you are using, such as indigo, kinetic, lunar, melodic, or ardent." The error text if you fail to comply is: "At least one of the following tags is required : boxturtle, cturtle, diamondback, electric, fuerte, groovy, hydro, indigo, jade, kinetic, lunar, melodic, noetic, r2b3, ardent, bouncy, crystal, dashing, eloquent, ros1 or ros2" Shouldn't these match? And why no love for Foxy? :) Originally posted by dawonn_haval on ROS Answers with karma: 103 on 2020-08-07 Post score: 0 This is something that only the ROS Answers admins can do. This may change with the Askbot version, but there should be a list of "Mandatory tags" on this page: https://answers.ros.org/settings/FORUM_DATA_RULES/ Originally posted by chapulina with karma: 366 on 2020-08-07 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by dawonn_haval on 2020-08-07: How does one go about notifying an admin about such a thing? Is there an issue tracker somewhere? Comment by chapulina on 2020-08-07: Here it is: https://github.com/ros-infrastructure/answers.ros.org
{ "domain": "robotics.stackexchange", "id": 35384, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros2, meta", "url": null }
blast, phylogenetics Title: Timeout when downloading the ncbi nr blast database I am experiencing timeout problems when downloading the NCBI nr preformatted blast database using the update_blastdb script (version 504861). I run the script with the following parameters update_blastdb --decompress --passive --verbose nr and I get the following error message (in verbose mode) Downloading nr (45 volumes) ... Downloading nr.00.tar.gz...Net::FTP=GLOB(0x5610fb59b8f8)>>> PASV Net::FTP=GLOB(0x5610fb59b8f8)<<< 227 Entering Passive Mode (165,112,9,229,195,144). Net::FTP=GLOB(0x5610fb59b8f8)>>> RETR nr.00.tar.gz Net::FTP=GLOB(0x5610fb59b8f8)<<< 150 Opening BINARY mode data connection for nr.00.tar.gz (18745730730 bytes) Net::FTP: Net::Cmd::getline(): timeout at /usr/share/perl/5.26/Net/FTP/dataconn.pm line 82. Unable to close datastream at /usr/bin/update_blastdb line 202. Net::FTP=GLOB(0x5610fb59b8f8)>>> PASV Net::FTP: Net::Cmd::getline(): unexpected EOF on command channel: Connection reset by peer at /usr/bin/update_blastdb line 203. Failed to download nr.00.tar.gz.md5! Net::FTP: Net::Cmd::_is_closed(): unexpected EOF on command channel: Connection reset by peer at /usr/bin/update_blastdb line 101. Net::FTP: Net::Cmd::_is_closed(): unexpected EOF on command channel: Connection reset by peer at /usr/bin/update_blastdb line 101.
{ "domain": "bioinformatics.stackexchange", "id": 1803, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "blast, phylogenetics", "url": null }
structural-engineering, civil-engineering Title: What is the technical name for temporary construction fences? My structural engineering friend and I passed a construction site years ago, and he called those temporary fences a technical name. Now I ask him the technical name he said those years and he doesn't even remember the moment, understandably, and is not sure there is even a technical name to begin with. I googled "what is the fence built around ongoing projects called" and 'temporary fencing" came up. This is my final straw to chalk it up to a distorted memory or finally learn that elusive name (every few months I remember that moment and still don't remember the name). So is there any technical name for temporary construction fences just as we have aggregates for sand, gravel, etc? If there is, please for the love of sanity, what is it? I would call this “site hoarding” (google images preview below - is this what you’re thinking of?)
{ "domain": "engineering.stackexchange", "id": 5443, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "structural-engineering, civil-engineering", "url": null }
quantum-mechanics, universe, space-expansion Matter does not do this. A classical particle has a fixed mass, and thus a fixed energy by $E~=~mc^2$. As a result the density of matter particles scales as $\rho~\propto~a^{-3}$. This does not scale in the same manner. A matter particle has a Compton wavelength $\lambda_c~=~\frac{\hbar}{mc}$, but just as with the classical particle, the mass does not scale with the expansion. Hoyle and Bondi proposed an idea that particles would be created as the universe expands. While this would not cause the Compton wavelength to change, it would result in an overall matter distribution that would scale as does radiation.
{ "domain": "physics.stackexchange", "id": 31398, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, universe, space-expansion", "url": null }
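The two scalings mentioned above can be illustrated with a toy sketch (units and numbers are mine): fixed-mass particles dilute with the comoving volume, while a radiation-like component also loses energy per quantum as $1/a$:

```python
N, m = 1000, 1.0  # particle count and (fixed) mass-energy, arbitrary units

def matter_density(a):
    """rho_matter ~ a**-3: fixed energy per particle, diluted by volume."""
    return N * m / a**3

def radiation_density(a):
    """rho_radiation ~ a**-4: each quantum additionally redshifts as 1/a."""
    return N * (m / a) / a**3
```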
quantum-mechanics, homework-and-exercises, electromagnetism, schroedinger-equation, gauge-invariance $$ \frac{1}{2m} \left[ -\hbar^2\nabla^2-\frac{q\hbar}{i}( \nabla \cdot\vec{A} + \nabla^2\Lambda+ \vec{A} \cdot \nabla + \nabla \Lambda \cdot \nabla) + q^2[\vec{A}^2+2(\vec{A}\cdot \nabla \Lambda) + (\nabla \Lambda )^2]\right] e^{iq\Lambda/\hbar} \psi +qV e^{iq\Lambda/\hbar} \psi -qe^{iq\Lambda/\hbar} \psi \partial_t\Lambda $$ It is possible to observe that the last term on both (the right and left) sides cancels. Then, using: $\nabla ( e^{iq\Lambda/\hbar} \psi ) = e^{iq\Lambda/\hbar}\nabla\psi + \frac{iq}{\hbar} e^{iq\Lambda/\hbar}\psi \nabla \Lambda$ $ \nabla^2 ( e^{iq\Lambda/\hbar} \psi ) =e^{iq\Lambda/\hbar} \nabla^2\psi + \frac{2iq}{\hbar}e^{iq\Lambda/\hbar}(\nabla \Lambda)(\nabla \psi) + \psi \frac{iq}{\hbar} e^{iq\Lambda/\hbar} \nabla^2 \Lambda - \frac{q^2}{\hbar^2}\psi e^{iq\Lambda/\hbar} (\nabla \Lambda)^2 $ we then obtain (by applying the operators and canceling all the $e^{iq\Lambda/\hbar}$ ):
{ "domain": "physics.stackexchange", "id": 88485, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, homework-and-exercises, electromagnetism, schroedinger-equation, gauge-invariance", "url": null }
javascript, jquery, object-oriented, functional-programming, revealing-module-pattern var getUri = function getUri (stockObj) { return 'http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo' + '.finance.historicaldata%20where%20symbol%20%3D%20%22' + stockObj.ticker + '%22%20and%20startDate%20%3D%20%22' + stockObj.startDate + '%22%20and%20endDate%20%3D%20%22' + stockObj.endDate + '%22&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys'; }; Since you said you used jQuery, consider using $.param to build the query portion of your url. It will do the proper escaping for you. getStockData doesn't look complete, but if you want to be functional and all that, the function must return something. $.getJSON returns a Promise-like object which you can listen to for resolution/rejection. You can also parse and return in a resolve handler to do additional logic before handing it off to the caller. You can use this to restructure the data, do additional computations etc. var getStockData = function getStockData (stockObj) { return $.getJSON(getUri(stockObj)).then(function (priceData) { var startDate = utility.formatForConsole(stockObj.startDate); var endDate = utility.formatForConsole(stockObj.endDate); var pricesArray = priceData.query.results.quote; var startingPrice = pricesArray[pricesArray.length - 1].Adj_Close; var endingPrice = pricesArray[0].Adj_Close; var percentChange = utility.getPercentChange(startingPrice, endingPrice);
{ "domain": "codereview.stackexchange", "id": 19486, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript, jquery, object-oriented, functional-programming, revealing-module-pattern", "url": null }
You can handle this with coupled recurrences. Let $$A(k)$$ be the number of words without $$AB$$ or $$BA$$ that end in a character other than $$A$$ or $$B$$. Let $$B(k)$$ be the number of words without $$AB$$ or $$BA$$ that end in $$A$$ or $$B$$. You can add any character to an $$A(k)$$ word, but only $$n-1$$ characters to a $$B(k)$$ word, one of which leaves it as a $$B(k+1)$$ word. The recurrences are $$A(k+1)=(n-2)A(k)+(n-2)B(k)\\ B(k+1)=2A(k)+B(k)\\ A(0)=1\\ B(0)=0$$ For small $$k$$ you can make a spreadsheet to do these. For large $$k$$ you can do the eigenvalue/eigenvector thing on the matrix $$\begin {pmatrix} n-2&n-2\\2&1 \end {pmatrix}$$ The leading eigenvalue is $$\frac 12\left(\sqrt{n^2+2n-7}+n-1\right)\approx n$$ when both $$k$$ and $$n$$ are large. You want $$A(k)+B(k)$$ I made a spreadsheet for $$n=10$$, shown below. In the line for $$n=2$$ the $$98$$ under $$A+B$$ shows there are $$98$$ two character strings that are not $$AB$$ or $$BA$$. As there are $$10^2=100$$ unrestricted strings and we rule out $$2$$, this is correct. The $$18\ B$$ strings have one of $$9$$ characters (not a $$B$$) then an
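The recurrences are simple enough that, instead of a spreadsheet, a few lines of Python reproduce the quoted numbers (the function name is mine):

```python
# Coupled recurrences A(k+1) = (n-2)(A(k)+B(k)), B(k+1) = 2A(k)+B(k),
# with A(0) = 1, B(0) = 0; returns the total A(k)+B(k).
def count_words(n, k):
    a, b = 1, 0
    for _ in range(k):
        a, b = (n - 2) * (a + b), 2 * a + b
    return a + b

print(count_words(10, 2))  # 98: all 100 two-letter strings except AB and BA
```

A brute-force enumeration over a small alphabet confirms the recurrence counts exactly the strings avoiding the substrings AB and BA.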
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9728307688581724, "lm_q1q2_score": 0.8229588241397409, "lm_q2_score": 0.8459424295406088, "openwebmath_perplexity": 151.19244953212979, "openwebmath_score": 0.612464964389801, "tags": null, "url": "https://math.stackexchange.com/questions/3964090/how-many-k-letter-words-are-there-such-that-the-letters-a-and-b-are-not-next-t" }
$$n\color{#c00}q = {n\choose m}^{\vphantom{|^{|^|}}}\in\Bbb Z,\,\ m\color{#c00}q= {n\!-\!1\choose m\!-\!1}\in\Bbb Z\,\ \Rightarrow\ (n,m)q = \dfrac{(n,m)}n\smash{\overbrace{{n\choose m}}^{\Large n\color{#c00}q}}\in \Bbb Z\quad$$ Remark $$\$$ Below are a few proofs of the Lemma on fractions. Recall $$\,(x,y):=\gcd(x,y)$$ $$(1)\$$ Recall that a fraction can be written with denominator $$\,n\,$$ iff its least denominator $$\,d\mid n.\,$$ Therefore $$\,m,n\,$$ are denoms $$\iff d\mid m,n\iff d\mid (m,n)\iff (m,n)\:$$ is a denom. $$(2)\ \ \dfrac{mc}d,\dfrac{nc}d\in\Bbb Z\iff d\mid mc,nc\iff d\mid (mc,nc)=(m,n)c\iff\! \dfrac{(m,n)c}d\in\Bbb Z$$
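As a sanity check of the Lemma's consequence for binomial coefficients, namely that $\frac{(n,m)}{n}\binom{n}{m}$ is always an integer, here is a short brute-force verification (my own addition, not part of the proof):

```python
# Verify that gcd(n, m) * C(n, m) is divisible by n for all 1 <= m <= n,
# i.e. that (n, m)/n * C(n, m) is an integer.
from math import comb, gcd

for n in range(1, 40):
    for m in range(1, n + 1):
        assert (gcd(n, m) * comb(n, m)) % n == 0, (n, m)
print("gcd(n,m)/n * C(n,m) is an integer for all tested n, m")
```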
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9845754452025767, "lm_q1q2_score": 0.8368607486824143, "lm_q2_score": 0.849971175657575, "openwebmath_perplexity": 278.5833566559021, "openwebmath_score": 0.9662656188011169, "tags": null, "url": "https://math.stackexchange.com/questions/1165229/gcdn-m-over-nn-choose-m-is-an-integer" }
cmake Title: Adding external dependency (FLANN) to catkin package I'm trying to add FLANN (Fast Library for Approximate Nearest Neighbors) as a system dependency to a catkin package. I installed FLANN from source as instructed on the website and then set up my ROS package's CMakeLists.txt as follows: cmake_minimum_required(VERSION 2.8.3) project(nao_whole_body_ik) find_package(catkin REQUIRED COMPONENTS roscpp) find_package( PkgConfig REQUIRED) pkg_check_modules( flann REQUIRED flann ) catkin_package( INCLUDE_DIRS include LIBRARIES nao_whole_body_ik ) include_directories(include ${catkin_INCLUDE_DIRS}) add_executable(test_node src/test1_node.cpp) target_link_libraries(test_node ${catkin_LIBRARIES}) Then in my test1_node.cpp file I just have the test code given on their website, which consists of the following 2 include directives: #include <flann/flann.hpp> #include <flann/io/hdf5.h> The first include statement works fine. The second one produces a long list of linking errors complaining about undefined references to various functions. I'm not very good at configuring the CMakeLists file, but I suspect that I need to add hdf5 as a system dependency as well, but I can't figure out how to add multiple system dependencies. Simply writing pkg_check_modules( flann REQUIRED flann hdf5 ) doesn't work. I'm using Ubuntu 14.04 with ROS Indigo. Originally posted by Ali250 on ROS Answers with karma: 41 on 2016-01-19 Post score: 0 I was able to solve the problem by simply adding this line to CMakeLists.txt above find_package: FIND_LIBRARY(HDF5_LIBRARY hdf5 /usr/local/hdf5/lib/) and then after creating the executable: target_link_libraries(test_node ${catkin_LIBRARIES} ${HDF5_LIBRARY})
{ "domain": "robotics.stackexchange", "id": 23492, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "cmake", "url": null }
c++, performance, beginner This uses the very new std::countr_zero from the <bit> header, a C++20 feature. There are other ways to write that too, for example _tzcnt_u64. unsigned(N) is a bit unfortunate, but required for countr_zero, which refuses to work on signed types. Preferably this would be avoided by making N a constant of type size_t instead of #define-ing it, which I recommend anyway. Using this trick, bitReversal has gone down from taking 17ms to 14ms (additionally, the memory used by the permutation vector is saved). It's a bit better, but nothing as great as improving the calculation of the roots of unity. This is not the best way to do it. The if corresponds to a badly-predicted branch, and the memory access pattern is semi-random. There are various useful papers about the technique I used here and further improvements, for example practically efficient methods for performing bit-reversed permutation in C++11 on the x86-64 architecture. The actual FFT The real meat of the algorithm. Since you asked in the previous question not to bother with suggestions for different algorithms, I won't. However, I can still suggest a performance improvement: use SSE3. SSE2 is actually sufficient, but SSE3 adds the ADDSUBPD instruction which is handy for complex multiplication: __m128d multiplyComplex(__m128d A, __m128d B) { __m128d ARealReal = _mm_shuffle_pd(A, A, 0); __m128d AImagImag = _mm_shuffle_pd(A, A, 3); __m128d BRealImag = B; __m128d BImagReal = _mm_shuffle_pd(B, B, 1); return _mm_addsub_pd(_mm_mul_pd(ARealReal, BRealImag), _mm_mul_pd(AImagImag, BImagReal)); } Which could be used like this: void unorderedFFT2(complexVec& input, const complexVec& table) { std::size_t k = 2;
{ "domain": "codereview.stackexchange", "id": 40746, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, beginner", "url": null }
electromagnetism, electric-fields, photonics Title: Transforming electric field in frequency domain to intensity in frequency domain I'm currently struggling to convert the electric field to the intensity in the frequency domain. In principle it seems like I need to do the following: $$ I(\omega)=\mathcal{F}[I(t)]=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}dt|\mathcal{R}(E(t))|^2e^{-i\omega t} $$ where $$ E(t)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}d\omega E(\omega)e^{i\omega t} $$ with the given electric field $E(\omega)$ in the frequency domain. This is in the first place not really doable analytically and also not very easy numerically. That's why I wanted to ask if there is a better way to do this transformation. It's actually remarkably easy. No need to go into the time domain. Use Ohm's law and the impedance of free space, $Z_0$ (about 377Ω). $$I(\omega)= |E(\omega)|^2 /Z_0 $$
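For concreteness, a tiny numerical sketch of the answer's formula, with a made-up complex field amplitude; only $Z_0=\sqrt{\mu_0/\epsilon_0}\approx 376.73\,\Omega$ is a standard value:

```python
# Intensity from a complex spectral amplitude: I = |E|^2 / Z0,
# with Z0 the impedance of free space.
import math

mu0 = 4 * math.pi * 1e-7          # vacuum permeability, H/m
eps0 = 8.8541878128e-12           # vacuum permittivity, F/m
Z0 = math.sqrt(mu0 / eps0)        # impedance of free space, ~376.73 ohm

E_omega = 100.0 + 50.0j           # made-up complex field amplitude, V/m
I_omega = abs(E_omega) ** 2 / Z0  # intensity, W/m^2
print(round(Z0, 2), round(I_omega, 2))  # 376.73 33.18
```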
{ "domain": "physics.stackexchange", "id": 92393, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, electric-fields, photonics", "url": null }
quantum-mechanics, wavefunction, schroedinger-equation, potential, scattering Title: Finite square wall with $E > V_0$ I'm working through a problem for homework and feel as if there is a typo or I am confused. The problem is with a one sided finite square wall such as this: So the energy is more than $V_0$. I'm trying to show that the wave function for x > 0 is equal to $Ae^{ikx}$ but I feel like that is a typo. I got to the solution being $Ae^{ikx} + Be^{-ikx}$ from solving Schrödinger's equation but I'm not sure how to remove the second part of this. The problem is considering an incoming right-mover for $x<0$ and asks how it scatters off a step potential into a reflected outgoing left-mover for $x<0$ and a transmitted outgoing right-mover for $x>0$. The last possibility -- an incoming left-mover for $x>0$ -- is not present in this scattering experiment. That's the answer to OP's question. By the way, if it sounds strange that we can identify solutions of the time-independent Schrödinger equation (TISE) as incoming and outgoing movers which move in time, check out this Phys.SE post.
{ "domain": "physics.stackexchange", "id": 25371, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, wavefunction, schroedinger-equation, potential, scattering", "url": null }
java, performance, beginner public double getXPosition() { return xPosition; } public double getYPosition() { return yPosition; } public double getParticleRadius() { return particleRadius; } public BrownianParticle updatePosition(double xIncrement, double yIncrement) { double random = Math.random(); if (random < 1.0/8.0) {xPosition -= xIncrement; yPosition += yIncrement;} else if (random < 2.0/8.0) {yPosition += yIncrement;} else if (random < 3.0/8.0) {xPosition += xIncrement; yPosition += yIncrement;} else if (random < 4.0/8.0) {xPosition += xIncrement;} else if (random < 5.0/8.0) {xPosition += xIncrement; yPosition -= yIncrement;} else if (random < 6.0/8.0) {yPosition -= yIncrement;} else if (random < 7.0/8.0) {xPosition -= xIncrement; yPosition -= yIncrement;} else if (random < 8.0/8.0) {xPosition -= xIncrement;} return new BrownianParticle(xPosition, yPosition, particleRadius); } } Here is my data-type implementation for drawing Brownian particles: import java.awt.Color; public class BrownianParticleDraw { private final BrownianParticle particle; public BrownianParticleDraw(BrownianParticle particle) { this.particle = particle; } public void draw() { StdDraw.setPenColor(StdDraw.GRAY); StdDraw.filledCircle(particle.getXPosition(), particle.getYPosition(), particle.getParticleRadius()); }
{ "domain": "codereview.stackexchange", "id": 39479, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, performance, beginner", "url": null }
reinforcement-learning, monte-carlo-methods, sutton-barto In addition, if you are looking at the bottom edge of the chart, this represents the player starting with two aces. If you pick one of the high points (dealer shows 4), then that also reduces the probability of seeing that particular state. So you are looking at a sample size of typically 4-5, but maybe in this case just one sample or maybe two, which the player then happened to go on to win, even though the odds made it unlikely. There is always some chance of winning, and dealer showing 4 is a bad start for the dealer, who has a good chance of going bust provided the player does not. If it hadn't happened this time for the "two aces + dealer showing 4" state, it may have happened for the "two aces + dealer showing 5" state. That's due to the nature of random sampling - if you have hundreds of states to sample, then a few of them are going to behave like outliers purely by chance until you have taken enough samples. In short, 10,000 randomly sampled games are nowhere near enough to reduce the error bounds on the value estimates to reasonable numbers for special cases such as starting with two aces. However, you can see in the 10,000 samples charts the beginnings of convergence, especially elsewhere in the chart. From the graph, for 10000 episodes what I see is that when we don't have a usable ace we always lose the game except if the sum is 20 or 21. Actually you don't see that; the expected result is not quite -1.0, but a little higher. So that means there is still a chance to win. Under this player policy, the worst chance is no usable ace and score 19, because the policy will be to "hit" and the player needs an Ace or a 2 just to stay in the game. Even then, the value is not quite as low as -1.0, but more like -0.9
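A rough back-of-the-envelope calculation (my own, ignoring the non-uniform distribution of dealer up-cards) shows why those states are visited only a handful of times in 10,000 episodes:

```python
# How often does a 10,000-episode run even visit "player holds two aces"?
from math import comb

p_two_aces = comb(4, 2) / comb(52, 2)    # both initial cards are aces: 6/1326
episodes = 10_000
expected_visits = episodes * p_two_aces  # across all dealer up-cards
per_dealer_card = expected_visits / 10   # naively spread over ten up-card values
print(round(expected_visits, 1), round(per_dealer_card, 2))  # 45.2 4.52
```

About 4-5 samples per "two aces + dealer card" state, matching the sample sizes quoted above.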
{ "domain": "ai.stackexchange", "id": 1690, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "reinforcement-learning, monte-carlo-methods, sutton-barto", "url": null }
homework-and-exercises, electromagnetism, special-relativity, gauge Now, the integral transform may be expressed by an operator ${\mathcal D}$ that is translationally symmetric. It therefore commutes with the four-divergence operator, so the four-divergence $$\nabla_4\cdot\{ V\epsilon_0,\vec A/\mu_0 \}\equiv \frac{\nabla\cdot \vec A}{\mu_0}+ \frac{\partial (V\epsilon_0)}{\partial t}=\dots$$ may be written, because $V\epsilon_0={\mathcal D}\rho$ and $\vec A/\mu_0={\mathcal D}\vec J$, as $$\dots = \nabla_4\cdot {\mathcal D} \{\rho,\vec J\} = {\mathcal D}\nabla_4\cdot \{\rho,\vec J \} = 0$$ but this vanishes because of the continuity equation for the sources (even before we act on it with ${\mathcal D}$). The proof may be rewritten without unconventional "operators" but I think that its idea is much more transparent in this form.
{ "domain": "physics.stackexchange", "id": 26151, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, electromagnetism, special-relativity, gauge", "url": null }
$$\text{Therefore the angle bisectors of DPC and AQD are perpendicular. QED}$$ [ Well that is assuming I have not put letters in stupid places anyway ] Melody  Nov 13, 2017 #2 +78753 +1 That is excellent, Melody!!!!.....I did not know the thing about the exterior angle of a cyclic quad = the opposite interior angle.....I'm going to have to prove that to myself.....LOL!!!!! I'm adding this one to my "Watchlist".......it's a very nice one  !!!!! CPhill  Nov 13, 2017 #3 +91049 +2 Thanks for complimenting me on my answer, Chris. I'm sure you know better than most that when we put a lot of effort into an answer like this we do want someone to notice. I mean we get our own satisfaction but still it is nice if we have a small appreciative audience as well. You provide so many excellent geometry answers, I did not think you would notice this one. ".I did not know the thing about the exterior angle of a cyclic quad = the opposite interior angle" I kind of extrapolated that ... One of the most important features of a cyclic quad is that opposite angles are supplementary. I was going to use that. But then I realised that since this is true it means, by extension, that the exterior angle of a cyclic quad is equal to the opposite internal angle. It just meant that I could skip one step in the proof, that is all. :) ------------------------------------ With these proofs I am sometimes a bit confused about when I should use the 'congruent to' sign and when I should just use the equal sign. Do you have any confusion over this? ----------------------------------- Did you see Rosala's question about series that are combinations of Arithmetic Progressions and Geometric progressions? She asked for the formula derivation to be explained and then she had 2 questions using it.
{ "domain": "0calc.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9902915230887367, "lm_q1q2_score": 0.8096370732682795, "lm_q2_score": 0.8175744761936437, "openwebmath_perplexity": 1351.8446031060219, "openwebmath_score": 0.8284313678741455, "tags": null, "url": "http://web2.0calc.com/questions/circles-and-angles" }
quantum-mechanics, quantum-spin, complex-numbers Here is my understanding of what each of the terms mean in the above quote; $ \sigma_z $ is the probability of +/- spins detected along the z axis. $ \alpha_{u} $ is the component of the state vector in the up direction, which is a positive spin on the z axis. $ \alpha_{d} $ is the same but for a negative spin on the z axis (d is for down). The asterisk denotes the complex conjugate. Given that $ \sigma_z $ is 0.5 for both spins, doesn't it also follow that the amplitude of each $\alpha$ (ie $\alpha^*\alpha$) is 0.5 as well? Ie. Isn't it true that $$ \alpha_{u}^{*}\alpha_{u} = \alpha_{d}^{*}\alpha_{d} $$ As I understand, it is also the case that $$ \alpha_{u}^{*}\alpha_{u} + \alpha_{d}^{*}\alpha_{d} = 1 $$ Therefore $$ \alpha_{u}^{*}\alpha_{u} = \alpha_{d}^{*}\alpha_{d} = 0.5 $$ The last sentence in the quote seems to state that my understanding is incorrect. Why? A state with definite “up” spin along $+x$ can be expanded in terms of states with spin along $\pm z$ as $$ \vert +\rangle_x =\frac{1}{\sqrt{2}}\left(\vert +\rangle_z+\vert -\rangle_z\right) $$ A state with definite “up” spin along $+y$ can likewise be expanded as $$ \vert +\rangle_y =\frac{1}{\sqrt{2}}\left(\vert +\rangle_z+i\vert -\rangle_z\right) $$
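A quick numerical check of the answer's point, using the standard spinor components above (the variable names are mine): both states have $|\alpha_u|^2=|\alpha_d|^2=0.5$, yet they are different states, distinguished by the relative phase $\alpha_d/\alpha_u$.

```python
# |+x> and |+y> expanded in the z basis: equal probabilities, different phases.
from math import isclose, sqrt

s = 1 / sqrt(2)
plus_x = (s, s)        # (alpha_u, alpha_d) for spin-up along +x
plus_y = (s, 1j * s)   # (alpha_u, alpha_d) for spin-up along +y

for alpha_u, alpha_d in (plus_x, plus_y):
    assert isclose(abs(alpha_u) ** 2, 0.5)
    assert isclose(abs(alpha_d) ** 2, 0.5)
    assert isclose(abs(alpha_u) ** 2 + abs(alpha_d) ** 2, 1.0)

# Equal |alpha|^2 weights, yet distinct states: the relative phase differs.
print(plus_x[1] / plus_x[0], plus_y[1] / plus_y[0])  # 1 for |+x>, i for |+y>
```

So the amplitudes squared being 0.5 each does not pin down the state; the phases carry physical information.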
{ "domain": "physics.stackexchange", "id": 51013, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, quantum-spin, complex-numbers", "url": null }
electrostatics, electric-fields, charge, potential, vectors In this trivial case the work is $\frac{k\,dq}{r}$. What will we see on choosing a path along the x axis? The work will be the same, but which forces will produce it? $E_y$ doesn't contribute to the work, since it's normal to our displacement, so we need to integrate only $E_x$. So the integral of $E_x$ along x in this situation will give us the same $\frac{k\,dq}{r}$. But this is exactly how you were going to calculate the potential in the original task. Ok, back to the original task. Now we know that integrating only $E_x$ along the x axis gives us $\frac{k\,dq}{r}$, so we can use this formula in the calculation. Generally speaking, math proves that we can ALWAYS use this formula, because the potential is additive since the electric field is additive. But this time we explicitly proved it to adjust our intuitive view of the task, which was originally against this formula. UPD: So the unclear question is why $ \int E_x\,dx = \frac{kq}{r} $ for moving along x to infinity. This result arises from the fact that the electric field is conservative (so the amount of work depends only on the endpoints of the path, not the particular route taken) while we know this work for a different (trivial) path. However, this result looks weird to you, doesn't it? You expect integration of $E_x$ instead of the full $E$ to give a smaller final value. The general reason for these integrals to be the same is that the replacement of $E$ with $E_x$ is compensated by differences in the path. Well, it can be checked directly by solving the integral, but I have another picture to illustrate the differences more clearly.
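The claim $\int_0^\infty E_x\,dx = kq/r$ can indeed be checked directly by solving the integral; here is a crude numerical version (the values of $k$, $q$, $r$ are my own samples), assuming the charge sits at perpendicular distance $r$ from the x axis, so $E_x(x)=kqx/(x^2+r^2)^{3/2}$:

```python
# Numerically integrate only E_x along the x axis and compare with kq/r.
k, q, r = 8.9875e9, 1e-9, 0.5   # Coulomb constant (SI), a 1 nC charge, r = 0.5 m

def E_x(x):
    # x-component of the field of a point charge at distance r off the x axis
    return k * q * x / (x**2 + r**2) ** 1.5

# trapezoidal integration out to a large cutoff
a, b, n = 0.0, 1000.0, 200_000
h = (b - a) / n
total = 0.5 * (E_x(a) + E_x(b))
for i in range(1, n):
    total += E_x(a + i * h)
integral = h * total

print(integral, k * q / r)  # the two agree closely; the cutoff misses only a small tail
```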
{ "domain": "physics.stackexchange", "id": 37645, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, electric-fields, charge, potential, vectors", "url": null }
python, performance, python-2.x, mathematics becomes one of these two: listcheck = filter(lambda powers: len(powers) == 3, listcheck) listcheck = [powers for powers in listcheck if len(powers) == 3] which seem to be about the same speed, though the latter should be slightly faster because the filter has to call a lambda instead of a predefined function. But using the fact that the only two possible lengths are 0 and 3 and bool(0) == False and bool(3) == True we can just use listcheck = filter(len, listcheck) in this case. function powers You compute for example b**2 more than once. It saves quite some time if you save b2 = b**2 (and similar for the other variables) at appropriate places. int() already truncates towards zero, which is the same as floor for the positive values here, so floor is not needed. (int(3.14) == 3 and int(3.99) == 3) You can collect what to write to the output file and write it in one go. This should be faster than repeated opening, writing and closing of the file. For this the function checksquare needs to be adapted to return the values instead of writing them: if listcheck: return listin[0], dictofnos, listcheck and in powers we add a list to collect the return values: def powers(limit): out = [] .... if temp >= 8: squares = checksquare(templist) if squares: out.append(squares) .... return out Additionally, we can put the writing part into a new function, separating the calculation and output parts: def write_powers(n): with open("output2.txt", "w") as out_file: for power in powers(n): out_file.write("\n{}\n{}\n{}\n".format(*power))
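For illustration, the filter(len, ...) trick in action on made-up data:

```python
# Empty lists are falsy and non-empty lists are truthy, so filter(len, ...)
# drops the empties without an explicit length comparison.
listcheck = [[1, 4, 9], [], [2, 8, 32], []]
kept = list(filter(len, listcheck))
print(kept)  # [[1, 4, 9], [2, 8, 32]]
```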
{ "domain": "codereview.stackexchange", "id": 21080, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, performance, python-2.x, mathematics", "url": null }
ros, ros-melodic, catkin, ubuntu, ubuntu-bionic -- Configuring incomplete, errors occurred! See also "/home/notsotechnical/catkin_ws/build/CMakeFiles/CMakeOutput.log". See also "/home/notsotechnical/catkin_ws/build/CMakeFiles/CMakeError.log". Makefile:334: recipe for target 'cmake_check_build_system' failed make: *** [cmake_check_build_system] Error 1 Invoking "make cmake_check_build_system" failed notsotechnical@notsotechnical-VirtualBox:~/catkin_ws$ Originally posted by notSoTechnical on ROS Answers with karma: 7 on 2019-09-24 Post score: 0
{ "domain": "robotics.stackexchange", "id": 33818, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros-melodic, catkin, ubuntu, ubuntu-bionic", "url": null }
special-relativity Title: Looking for specific Relativity example Many years ago (in the '70s I think) I read an explanation of the meaninglessness of simultaneity at large distances. The example had to do with two people walking along a sidewalk in opposite directions, and an alien race on a planet millions of light-years away planning an invasion of the Solar System. The example showed that in one walker's reference frame the invasion fleet had departed, but in the other reference frame the fleet had not. At the time, the explanation made perfect sense, but I have forgotten the details and have never run across this example again. Does anybody know where this was, or have the text of the explanation? I think you are talking about the Rietdijk-Putnam argument.
{ "domain": "physics.stackexchange", "id": 1635, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "special-relativity", "url": null }