Columns: text (string, lengths 49–10.4k) · source (dict)
gravity, black-holes, gravitational-waves, gravitational-collapse The same is true for any higher multipole moments that could source the generation of gravitational waves. (Consequently, this also means that the perfectly spherical collapse must result in a black hole with mass $M$.)
{ "domain": "physics.stackexchange", "id": 67367, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gravity, black-holes, gravitational-waves, gravitational-collapse", "url": null }
particle-physics, fermions, majorana-fermions Title: How can one know if one has a Majorana fermion? If the Majorana fermion is a fermion that is its own antiparticle and exactly the same as its fermion counterpart, then how do they know that it's not just a fermion? A Majorana fermion would be a fermion that is its own anti-particle. This is a common trait for bosons - the photon, the gluon, and the Z boson are all their own anti-particles. Obviously, a Majorana fermion must not interact through the electromagnetic force, that is, it must have zero charge. More technically, the wave equation that governs Majorana particles is a real wave equation. You can change a particle into its anti-particle by complex conjugation (reversing the sign of complex numbers). Therefore, since it's a real wave equation, Majorana fermions would be their own anti-particles. So, I think you are misunderstanding something - a Majorana fermion is a label applied to a fermion that is its own anti-particle. It's not that fermions have a Majorana fermion that they are related to; it's just a name for particles that are equivalent to their anti-particles. For example, the neutrino may be a Majorana fermion. The way this can be tested is to see whether double beta decay can occur without neutrinos. In double beta decay, two neutrons in the nucleus are converted to protons, and two electrons and two electron antineutrinos are emitted. In neutrinoless double beta decay, the two neutrinos annihilate each other so that only the two electrons are emitted, which is obviously only possible if the neutrino is a Majorana particle. This is a Feynman diagram of neutrinoless double beta decay. Some more information on the Wiki page: http://en.wikipedia.org/wiki/Majorana_fermion
{ "domain": "physics.stackexchange", "id": 4250, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "particle-physics, fermions, majorana-fermions", "url": null }
quantum-mechanics $$\hat{H}_{Electronic}=- \frac{\hbar^2}{2}\sum_{m=1}^{M}\frac{\vec{\nabla}_{\vec{r}_{m}}^{2}}{m_{e}} + \frac{1}{2}\sum_{i\neq j =1}^{M}\frac{e^2}{|\vec{r}_{i}-\vec{r}_{j}|}- \sum_{i =1}^{M}\sum_{j=1}^{N}\frac{Z_{j}e^2}{|\vec{r}_{i}-\vec{R}_{j}|}+\frac{1}{2} \sum_{i\neq j =1}^{N}\frac{Z_{i}Z_{j}e^2}{|\vec{R}_{i}-\vec{R}_{j}|}$$ You want to solve the eigenvalue equation $$\hat{H}_{Total}\Psi(\{r\},\{R\})=E\Psi(\{r\},\{R\}).$$ First solve for a complete set of electronic adiabatic states, parametrically dependent on the nuclear geometry: $$H_{Electronic}(r,R)\psi_{n}(\{r\};\{R\})=e_{n}(\{R\}) \psi_{n}(\{r\};\{R\}).$$ Now the total molecular wave function can be expanded in terms of the complete set of electronic wave functions:
{ "domain": "physics.stackexchange", "id": 43836, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics", "url": null }
c, hash-map
/* For testing... */
void displayHashTable(hnode** hashArray, void (*printValue)(void*))
{
    int i;
    hnode *curr, *next;
    printf("%-20s%-20s\n", "hashTable-Key,", "hashTable-Value");
    for (i = 0; i < HASHSIZE; i++) {
        curr = hashArray[i];
        while (NULL != curr) {
            next = curr->next;
            printf("%-20s,", curr->name);
            printValue(curr->value);
            printf("\n");
            curr = next;
        }
    }
}

hashTable.h

#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#ifndef HASHTABLE_H_INCLUDED
#define HASHTABLE_H_INCLUDED

#define HASHSIZE 52 /* 26 for a-z and 26 for A-Z */

/* struct hnode */
struct hnode {
    struct hnode *next; /* next entry in chain */
    char* name;         /* the key is a string (label or name) */
    void* value;        /* value can be any type or struct --> generic */
} typedef hnode;

/* createHnode */
hnode* createHnode(hnode* next, char* name, void* value);

/* hashfunction */
int hashfunction(char* name);

/* createHashTable */
/* Allocate an array of pointers to item. */
hnode** createHashTable(int hashsize);

/* putValueForKey */
int insertNameValue(hnode** hashArray, char* name, void* value);

/* getValueByKey */
void* getValueByName(hnode** hashArray, char* key);
{ "domain": "codereview.stackexchange", "id": 26988, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, hash-map", "url": null }
bioinformatics Title: Why is my base quality in alignment not perfect? I have a fasta file with some DNA sequences. I would like to simulate next-generation sequencing reads from it. I'm doing it without any base error and mutation error. wgsim -e 0 -r 0 sequence.fa seq_0_1.fq seq_0_2.fq To my knowledge, this is a perfect simulation. Next, I give the paired reads to bwa for alignment. bwa mem -M sequence.fa seq_0_1.fq seq_0_2.fq > P0.sam Now, I check the ASCII of the base quality (column 11 in the SAM format; the specification is here). head -n 3 P0.sam | cut -f11 IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII IIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIII According to this page, the ASCII characters (in order of quality) are: !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~ My experiment is supposed to be perfect, so my base-quality characters are expected to appear near the right end of the list (like x, y, z). However, my character I is nowhere near the top of the list. In particular, I'm unable to achieve anything from J to ~. Why? When fasta reads are aligned they are by default assigned the Phred score of 40, which in Phred+33 encoding is represented by I. Phred scores 0 to 40 in Phred+33 encoding use the ASCII characters from ! to I: !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHI
{ "domain": "biology.stackexchange", "id": 4639, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "bioinformatics", "url": null }
c#, minesweeper
<Button Content="" Click="Button_Click" FontSize="48" FontWeight="Bold"/>
<Button Content="" Click="Button_Click" FontSize="48" FontWeight="Bold"/>
<Button Content="" Click="Button_Click" FontSize="48" FontWeight="Bold"/>
<Button Content="" Click="Button_Click" FontSize="48" FontWeight="Bold"/>
</UniformGrid>
{ "domain": "codereview.stackexchange", "id": 22431, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, minesweeper", "url": null }
statistical-mechanics, operators, density-operator, approximations, stochastic-processes Title: Time-ordered matrix exponential quasi-static limit Define a matrix differential equation $$\dot{X}=A(t)X(t),$$ where $X=[x_1,x_2,\dots]^T$ is a 1D vector and $A(t)$ is a complex-valued time-dependent matrix. This system is solved by $$X(t)= \mathcal{T}\exp\left[\int_0^t A(t')dt'\right] X(0),$$ where we introduce a time-ordered exponential. In general, evaluating this is an arduous task. For my specific case: $\bullet$ My time-dependent matrix has the form $A(t)=A_0+\alpha(t)A_1$, where $\alpha(t)$ is an Ornstein-Uhlenbeck process, in other words, colored noise. This detail is not crucial (Itô calculus is not necessary here because of the finite correlation time of the process). $\textbf{If}$ the noise is slow enough that $\alpha(t)\rightarrow \alpha$ (let's call it the static limit), then one can evaluate the matrix exponential without the time-ordering and everything is fine: I can solve for $X(t)$ and then average over $\alpha$. My goal is now to go to a quasi-static limit, i.e. we keep the time dependence of $\alpha(t)$, assuming it is still slowly varying. Now I do need to compute the time-ordered exponential. My question is: Is there any approximation I could use? Any techniques to solve it?
{ "domain": "physics.stackexchange", "id": 97959, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "statistical-mechanics, operators, density-operator, approximations, stochastic-processes", "url": null }
php, email, twitter-bootstrap, captcha
        </div>
    </div>
    <div class="col-md-6">
        <div class="form-group">
            <label for="message"> Message</label><?php print construct_error_html($message_error); ?>
            <textarea name="message" id="message" class="form-control" rows="10" cols="25" placeholder="Message"><?= isset($_POST['message']) ? htmlspecialchars($_POST['message']) : '' ; ?></textarea>
        </div>
    </div>
    <div class="col-md-12">
        <button type="submit" name="submit" class="btn btn-primary pull-right" id="btnContactUs"> Send Message</button>
    </div>
</div>
</form>
</div>
</div>
</div>
</div>
<?php } ?>
{ "domain": "codereview.stackexchange", "id": 22109, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "php, email, twitter-bootstrap, captcha", "url": null }
c++, qt
DWORD getParentPID(DWORD pid)
{
    HANDLE h = nullptr;
    PROCESSENTRY32 pe{0,0,0,0,0,0,0,0,0,{0,0}};
    DWORD ppid = 0;
    pe.dwSize = sizeof(PROCESSENTRY32);
    h = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (Process32First(h, &pe)) {
        do {
            if (pe.th32ProcessID == pid) {
                ppid = pe.th32ParentProcessID;
                break;
            }
        } while (Process32Next(h, &pe));
    }
    CloseHandle(h);
    return (ppid);
}

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    customPlot = ui->tab;
    processCustomPlot = ui->tab_3;
    ui->tabWidget->setTabText(0, "Performance");
    ui->tabWidget->setTabText(1, "Processes");
    ui->tabWidget->setTabText(2, "Process performance");
    resourceUsageLabel = ui->label_2;
    customPlot->addGraph();
    processCustomPlot->addGraph();
    processInfoTree = ui->treeWidget;
    process_ComboBox = ui->comboBox_2;
    process_ComboBox->setSizeAdjustPolicy(QComboBox::AdjustToMinimumContentsLengthWithIcon);
    // set selection mode
    processInfoTree->setSelectionMode(QAbstractItemView::SelectionMode::ExtendedSelection);
    // setup headers
    processInfoTree->setColumnCount(4);
{ "domain": "codereview.stackexchange", "id": 38036, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, qt", "url": null }
beginner, c, http, socket
    fileLength = statBuffer.st_size;
    return fileLength;
}

void composeHeader(char *header, httpRquest *request, char *filepath)
{
    /* file name has to be shorter than MAX_FILENAME (1024) characters */
    char filename[MAX_FILENAME];
    findFilename(filepath, filename);
    off_t fileLength;
    fileLength = getFileLength(filepath);
    if (fileLength < 0)
        errorExit("getting file length failed");
    if (request->requireRange == REQUIRE_RANGE_FALSE) {
        sprintf(header,
                "HTTP/1.1 200 OK\r\n"
                "Accept-Ranges: bytes\r\n"
                "Content-Disposition: attachment; filename=\"%s\"\r\n"
                "Content-Length: %lld\r\n"
                "\r\n",
                filename, fileLength);
    } else if (request->requireRange == REQUIRE_RANGE_TRUE) {
        sprintf(header,
                "HTTP/1.1 206 Partial\r\n"
                "Accept-Ranges: bytes\r\n"
                "Content-Disposition: attachment; filename=\"%s\"\r\n"
                "Content-Range: bytes %lld-%lld/%lld\r\n"
                "Content-Length: %lld\r\n"
                "Content-Type: multipart/byteranges\r\n"
                "\r\n",
                filename, request->offset, request->end, fileLength,
                request->end - request->offset + 1);
        printf("offset: %lld\nend: %lld\n", request->offset, request->end);
    }
    puts(header);
}

#ifdef __APPLE__
void sendFile(char *filepath, int clientSocketFD, httpRquest *request)
{
    FILE *file = fopen(filepath, "rb");
    if (!file)
        errorExit("failed to open file");
{ "domain": "codereview.stackexchange", "id": 30199, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, c, http, socket", "url": null }
c#, game, comparative-review, state-machine, state
        var player = _playerRepository.CreatePlayer(_playerName, _password);
        _sessionInputHandlerFactory.ActivateLoggedOn(player);
    }
}

LoggedOnInputHandler

internal class LoggedOnInputHandler : ISessionInputHandler
{
    private readonly IMudConfiguration _configuration;
    private readonly IMudControl _mudController;
    private readonly Player _player;
    private readonly SessionInputHandlerFactory _sessionInputHandlerFactory;

    public LoggedOnInputHandler(Player player, IMudConfiguration configuration, IMudControl mudController, SessionInputHandlerFactory sessionInputHandlerFactory)
    {
        _player = player;
        _sessionInputHandlerFactory = sessionInputHandlerFactory;
        _configuration = configuration;
        _mudController = mudController;
    }

    public void ActivateHandler(IPlayerConnection playerConnection)
    {
        playerConnection.SendText($"Welcome to {_configuration.Name} Mud {_player.Name}, enjoy your visit.\r\n");
    }

    public void ReceivedLine(IPlayerConnection playerConnection, string text)
    {
        switch (text)
        {
            case "shutdown":
                playerConnection.SendText("Shutting down!\r\n");
                _mudController.Shutdown();
                _sessionInputHandlerFactory.ActivateEndSession();
                break;
            case "quit":
                _sessionInputHandlerFactory.ActivateEndSession();
                return;
            default:
                playerConnection.SendText($"Sorry, I don't understand: {text}\r\n");
                break;
        }
    }
}
{ "domain": "codereview.stackexchange", "id": 22918, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, game, comparative-review, state-machine, state", "url": null }
c++, performance, strings, reinventing-the-wheel, memory-management
char front() { return *str; }

char back() { return str[length]; }

char& operator[](int n) { return str[n]; }

friend String& operator+(String lhs, String &rhs)
{
    lhs += rhs;
    return lhs;
}

String& operator+=(String &right)
{
    *this += right.str;
    return *this;
}

String& operator+=(char* right)
{
    size_t toAdd = strlen(right);
    if (length + toAdd > capacity) {
        while (length + toAdd > capacity) {
            if (capacity == 0)
                capacity = 1;
            capacity *= 2;
        }
        str = (char*)realloc(str, (capacity + 1) * sizeof(char));
    }
    memcpy(str + length, right, toAdd * sizeof(char));
    length += toAdd;
    str[length] = 0;
    return *this;
}

String& operator+=(char right)
{
    if (length + 1 > capacity) {
        while (length + 1 > capacity) {
            if (capacity == 0)
                capacity = 1;
            capacity *= 2;
        }
        str = (char*)realloc(str, (capacity + 1) * sizeof(char));
    }
    str[length++] = right;
    str[length] = 0;
    return *this;
}

String& operator=(char* right)
{
    length = strlen(right);
    size_t prevCap = capacity;
    while (length + 1 > capacity) {
        if (capacity == 0)
            capacity = 1;
        capacity *= 2;
    }
    if (capacity != prevCap)
        str = (char*)realloc(str, (capacity + 1) * sizeof(char));
    memcpy(str, right, (length + 1) * sizeof(char));
    return *this;
}

String& operator=(String &right)
{
    *this = right.str;
    return *this;
}

bool operator==(String &right) { return strcmp(str, right.str) == 0; }
{ "domain": "codereview.stackexchange", "id": 23527, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, strings, reinventing-the-wheel, memory-management", "url": null }
Turning my comment into an answer: I don't know whether you'll think this is cheating, but assuming without loss of generality that $a\geq b$, then $$a^2+b^2 \geq a^2 = a^\alpha a^{2-\alpha} \geq a^\alpha b^{2-\alpha}.$$

Ask him to prove, using polar coordinates, that $$a^2+b^2\ge ka^{\alpha}b^{2-\alpha}\quad \forall 0\le\alpha\le 2, \forall a,b\ge 0$$ where $$k=\frac{2}{\left(\alpha^{\alpha}(2-\alpha)^{(2-\alpha)}\right)^{1/2}}.$$ (Which is the best constant possible.)
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9752018369215507, "lm_q1q2_score": 0.8229680560946327, "lm_q2_score": 0.8438951045175643, "openwebmath_perplexity": 574.7083437489499, "openwebmath_score": 0.9286733865737915, "tags": null, "url": "https://math.stackexchange.com/questions/1952598/proving-algebraically-a2b2-ge-a-alphab2-alpha-for-0-le-alpha-le2-a" }
# I feel like epsilon-delta is reversed

The limit is about "when x approaches a, then y approaches L". Then, shouldn't the epsilon and delta be like "For all delta, no matter how small the delta is, you can always find an epsilon that makes ε < f(x)-L < ε"? But the conventional explanation says "for all epsilon, you find delta", which to me feels like "when y approaches L, x goes to a". • You meant "-ε < f(x)-L < ε", right? "ε < ε" is always false. Apr 21 at 8:10 • You've got a good answer, but if it helps, the proposed condition is equivalent to "$f$ is bounded." As the initial quantifier, "for all $\delta > 0$" is harder to meet for large $\delta$, so it incentivizes "picking $\delta>0$ as large as possible." Similarly, putting "there exists an $\varepsilon>0$" second incentivizes "picking $\varepsilon$ as large as possible." Apr 21 at 12:21 This is a common misunderstanding, and the only response I can ever think of is to just examine what the definition is really getting at. The statement $$\lim\limits_{x\to a}f(x)=L$$ means that, when $$x$$ is close to $$a$$, $$f(x)$$ is close to $$L$$. So we want something like "if $$|x-a|$$ is small, then $$|f(x)-L|$$ is small." But then we have to decide what "small" means. If $$|x-a|$$ is smaller than some $$\delta>0$$, how small should $$|f(x)-L|$$ be?
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.957277806109987, "lm_q1q2_score": 0.8211223644751369, "lm_q2_score": 0.8577680995361899, "openwebmath_perplexity": 216.23857780038324, "openwebmath_score": 0.9001168012619019, "tags": null, "url": "https://math.stackexchange.com/questions/4432171/i-feel-like-epsilon-delta-is-reversed" }
dna Title: Is having more copies of a gene better than having fewer? If so, why? I was learning about elephants rarely getting cancer and learned that elephants have 40 copies of certain genes while humans only have 2 copies, one from each parent. So I wonder whether it is better to have more copies of a gene than to have 2. I was learning about elephants rarely getting cancer and learned that elephants have 40 copies of certain genes while humans only have 2 copies, one from each parent. Where did you read that? Please always include your sources. What you're saying is unclear, but I suppose you are referring to ploidy variation among species and not to CNV (Copy Number Variation) for specific genes. It is also possible that you mistake alleles for genes, but it is impossible to tell where your misunderstanding is without reading your source. I will assume you are talking about ploidy variation. Elephants are diploid, just like humans. So, the claim appears flat wrong. So I wonder whether it is better to have more copies of a gene than to have 2. There is no general rule. Because a single working copy is often enough, high ploidy can be a pretty good defense against loss-of-function mutations, but that is only valid for some time just after a recent polyploidization event, as the relaxation of selection pressures against such mutations will lead these deleterious mutations to higher frequencies (at mutation - drift - selection equilibrium). Note that doubling or halving the number of gene copies often leads to issues in the dosage of mRNA and protein expressed. Related to that, you might want to read about dosage compensation
{ "domain": "biology.stackexchange", "id": 8660, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "dna", "url": null }
quantum-mechanics, quantum-field-theory, mathematical-physics [1] Okay, that's hyperbole - free theories fulfill them, and the work of Glimm and Jaffe contains a lot of well-defined interacting QFTs in two and three dimensions via a well-defined notion of the path integral in these cases. Unfortunately, this program of constructive field theory currently seems unable to be extended to higher dimensions, or to more "uncomfortable" theories like non-Abelian gauge theories.
{ "domain": "physics.stackexchange", "id": 40064, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, quantum-field-theory, mathematical-physics", "url": null }
c#, array Title: Find part of string in array, replace entire row with that part of string if found

Following is the string (I'm getting it from a config file, so it's not constant):

string sequence = "Concat({ACCOUNT_NUM},substring(FormatDate(yyyyMMddHHmmss,DateNow()) ,2,12), GetLast(GetNextSequence(seq_relation),1))";

It contains multiple custom methods, and I want their names in the same order as they appear in the above string. Following is the strategy I applied:

string[] arbitrary = sequence.Split('(').ToArray();
string[] methodsNmore = arbitrary.Take(arbitrary.Length - 1).ToArray();
string[] array2 = methodsNmore.Where(strr => strr.Contains(',')).ToArray();
string[] methods = array2.Select(str => str.Substring(str.LastIndexOf(',') + 1, str.Length - str.LastIndexOf(',') - 1)).ToArray();

for (int i = 0; i < methods.Length; i++)
{
    string row = Array.Find(methodsNmore, item => item.Contains(methods[i]));
    int ii = Array.IndexOf(methodsNmore, row);
    methodsNmore[ii] = methods[i];
}

The resulting array, methodsNmore, now contains only the names of the methods in the same order as in the above string sequence. Is there any other, more elegant way of doing it?

You can use a regular expression:

string[] names = Regex.Matches(sequence, @"([A-Za-z_]\w*)\(").Cast<Match>()
    .Select(m => m.Groups[1].Value).ToArray();
{ "domain": "codereview.stackexchange", "id": 8020, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, array", "url": null }
algorithm, c, sorting, linked-list, mergesort
    node->value = value;
    node->next = NULL;
    if (list->head) {
        list->tail->next = node;
        list->tail = node;
    } else {
        list->head = node;
        list->tail = node;
    }
    list->size++;
}

int linked_list_is_sorted(linked_list_t* list)
{
    linked_list_node_t* node1;
    linked_list_node_t* node2;
    if (list->size < 2) {
        return 1;
    }
    node1 = list->head;
    node2 = node1->next;
    while (node2) {
        if (node1->value > node2->value) {
            return 0;
        }
        node1 = node2;
        node2 = node2->next;
    }
    return 1;
}

void linked_list_display(linked_list_t* list)
{
    char* separator = "";
    for (linked_list_node_t* node = list->head; node; node = node->next) {
        printf("%s%d", separator, node->value);
        separator = ", ";
    }
}

static linked_list_node_t* reverse(linked_list_node_t* head)
{
    linked_list_node_t* new_head = head;
    linked_list_node_t* tmp_head;
    tmp_head = head;
    head = head->next;
    tmp_head->next = NULL;
    while (head) {
        tmp_head = head;
        head = head->next;
        tmp_head->next = new_head;
        new_head = tmp_head;
    }
    return new_head;
}

static linked_list_node_t* merge(linked_list_node_t* left_list, linked_list_node_t* right_list)
{
    linked_list_node_t* merged_head = NULL;
    linked_list_node_t* merged_tail = NULL;
    linked_list_node_t* tmp_node;
{ "domain": "codereview.stackexchange", "id": 25994, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithm, c, sorting, linked-list, mergesort", "url": null }
ros, arduino, sonar Title: Anyone using the SRF08 ranger on an address other than 0xF8? I've got a single SRF08 ranger running on the supplied 0xF8 address, but I can't get it to work on the other addresses (I need several sensors). I have verified the ranger to be addressed to 0xE8, but the ranger does not reply on either the programmed address 0xE8 or the +4 address the author uses for subsequent calls. The author suggests: New_Address += 4; // offset address not sure why this is but it works for this address http://www.ros.org/wiki/rosserial_arduino/Tutorials/SRF08%20Ultrasonic%20Range%20Finder I have implemented a multi-ranger program in the past (on a 2620 PIC), and I didn't need any "+4" trick to get it working. It also worked on all six addresses. I have not yet "ported" my code to the Atmel (I wanted to use the Sonar_srf08 library). There is a newer version of the library, but it's essentially the same; it just adds gain and range parameters to several wire.writes. http://playground.arduino.cc/Main/SonarSrf08 Any ideas or fixes? Thanks, Alan Originally posted by KM6VV on ROS Answers with karma: 191 on 2013-03-29 Post score: 0 I fixed the problem. The SonarSRF08 library did not shift the I2C address right by one, so the addresses sent to Wire were wrong. Wire.beginTransmission(address/2); Wire.requestFrom(address/2, numBytes); Now the +4 added to the address in the example (calling program) is not needed. I have now added code to cycle through my three SRF08 sonar modules. Alan Originally posted by KM6VV with karma: 191 on 2013-03-30 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 13604, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, arduino, sonar", "url": null }
everyday-life Title: Is there a difference between forces due to pressure differentials and collision? Imagine moving a flat object at a constant speed straight through a fluid: it could be a paddle in the water, the wing of a plane in the air, or a sail. Ignoring whatever pushes the object at its constant speed, with my shaky, at best, understanding of physics, I think there are two forces applied to the object: one from the collision with the fluid (equal and opposite reaction and all that) and one arising from the pressure differential in front of and behind the object. Are the two just the same thing (from conservation of momentum?) or do they differ? And if they differ, which is usually bigger? They are the same. There is always at least a static pressure acting on the wings; this is the case for any object in a fluid, and it is what leads to buoyancy, for example. When you begin to move in the fluid, things change, as the relative movement between the fluid and the surface you examine now becomes a factor. The object that you send through the fluid now has different pressures acting on it, because the air is not just static but able to "hit" the surface (and these pressures will vary based on the geometry of the object). These interactions will also cause a change in the momentum of the air, and of the object itself. Basically, we design wings to push air in the direction opposite to the one we want our object's momentum to take, and the force of that generated momentum manifests itself as pressure gradients across the surface.
{ "domain": "physics.stackexchange", "id": 47376, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "everyday-life", "url": null }
quantum-mechanics, homework-and-exercises, integration Title: Calculating the expectation value for kinetic energy $\langle E_k \rangle$ for a known wave function I have a wave function ($a=1\,nm$): $$\psi=Ax\exp\left[\tfrac{-x^2}{2a}\right]$$ for which I already calculated the normalisation factor (in my other topic): $$A = \sqrt{\frac{2}{a\sqrt{\pi a}}} = 1.06\frac{1}{nm\sqrt{nm}}$$ What I want to know is how to calculate the expectation value of the kinetic energy. I have tried to calculate it analytically but I get lost in the integration: \begin{align} \langle E_k \rangle &= \int\limits_{-\infty}^{\infty} \overline\psi\hat{T}\psi \,dx = \int\limits_{-\infty}^{\infty} Ax \exp \left[{-\tfrac{x^2}{2a}}\right]\left(-\tfrac{\hbar^2}{2m}\tfrac{d^2}{dx^2}Ax \exp \left[{-\tfrac{x^2}{2a}}\right]\right)\,dx =\dots \end{align} At this point I go and solve the second derivative and will continue after this: \begin{align}
{ "domain": "physics.stackexchange", "id": 8871, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, homework-and-exercises, integration", "url": null }
$$\operatorname{supp}(\mu) \subseteq (L\mu)^{-1}(\{m\}) \qquad \text{and} \qquad M \subseteq (L\mu)^{-1}((-\infty, m]).$$ To make further progress, we separate the exceptional case from the general argument: 1. If it happens that $$\operatorname{supp}(\mu)$$ lies in a line, then the problem reduces to a 1-d one. In that case, it is not hard to check that $$\mu$$ must be of the form $$\mu = \frac{1}{2}(\delta_{x_0} + \delta_{x_1})$$ for some distinct points $$x_0$$ and $$x_1$$. In that case $$M$$ must be a compact subset of the line segment $$\overline{x_0x_1}$$. 2. Otherwise, $$(L\mu)^{-1}((-\infty, m])$$ is a strictly convex set and $$(L\mu)^{-1}(\{m\}) = \partial (L\mu)^{-1}((-\infty, m])$$. This may be used to further restrict the possible forms that $$\operatorname{supp}(\mu)$$ can take. 3. As a special case, consider the situation where the convex hull of $$M$$ is a convex polytope. Then $$\operatorname{supp}(\mu)$$ is supported on the vertex set of that polytope. If $$x_1, \cdots, x_n$$ denotes these vertices, then the problem reduces to solving the system of linear equations $$L \mu = m\mathbf{1}, \qquad \mathbf{1}^{\mathsf{T}}\mu = \mathbf{1}$$
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9859363729567545, "lm_q1q2_score": 0.8128669949539545, "lm_q2_score": 0.8244619199068831, "openwebmath_perplexity": 94.88652348966326, "openwebmath_score": 0.994627058506012, "tags": null, "url": "https://math.stackexchange.com/questions/3116001/what-is-the-largest-possible-expectation-of-difference-between-two-i-i-d-random?noredirect=1" }
# Interpolation

A standard idea in interpolation is to find a polynomial pn(x) of degree n (or less) that assumes the given values; thus (1) We call this pn(x) an interpolation polynomial and x0, ‥‥, xn the nodes. And if ƒ(x) is a mathematical function, we call pn(x) a polynomial approximation of ƒ. We use pn(x) to get (approximate) values of ƒ for x's between x0 and xn ("interpolation") or sometimes outside this interval ("extrapolation"). Lagrange Interpolation: Linear interpolation is interpolation by the straight line through (x0, ƒ0), (x1, ƒ1); see Fig. 428. Thus the linear Lagrange polynomial p1 is a sum p1 = L0ƒ0 + L1ƒ1 with L0 the linear polynomial that is 1 at x0 and 0 at x1; similarly, L1 is 0 at x0 and 1 at x1. Obviously, this gives the linear Lagrange polynomial (2) Fig. 428. Linear interpolation
{ "domain": "slideplayer.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9927672374804856, "lm_q1q2_score": 0.8022226552973304, "lm_q2_score": 0.8080672135527632, "openwebmath_perplexity": 1697.246070172658, "openwebmath_score": 0.8605360388755798, "tags": null, "url": "http://slideplayer.com/slide/3247988/" }
mass, astrophysics, exoplanets Title: Exoplanet Mass-Radius Diagram I'm currently studying the following diagram: But I'm not entirely sure I understand what's going on. Is it just that most exoplanets discovered are pretty much made up of hydrogen and helium? And then a couple (like around (1,1)) have a composition like the Earth, i.e. iron and such. Basically: does the place where a planet sits in the diagram just tell me what most of the planet is made of? And what is the difference between exoplanets and solar system planets? I thought exoplanets were planets orbiting a star. But maybe the solar system planets are planets within a system of many planets orbiting a sun, and an exoplanet is only one planet around one star? Exoplanet means a planet that isn't in our solar system, and solar system planets are in our solar system. The two (blue) triangles must be Earth and Mars, I suppose, and the purple points are exoplanets. And yes, from the graph given, most of the exoplanets are made of hydrogen and helium and revolve around other stars.
{ "domain": "physics.stackexchange", "id": 18555, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "mass, astrophysics, exoplanets", "url": null }
javascript Here it is in a jsfiddle: http://jsfiddle.net/aGq9q/13/ For starters, you can remove a lot of duplicate code from the populateBilling() function like this: function populateBilling(planName) { var options = { basic: { "Option" : ["$200/month for 1-yr", "$250/month"], "Value" : [300, 350] }, prime: { "Option" : ["$300/month for 1-yr", "$350/month"], "Value" : [400, 450] }, gold: { "Option" : ["$400/month for 1-yr", "$450/month"], "Value" : [500, 550] } } //RESET BILLING PERIOD OPTIONS select.options.length = 1; document.getElementById('payment-total').innerText = 0 + additional; document.getElementById('payment-rebill').innerText = 0 + additional; var data = options[planName]; if (data) { for (var i = 0; i < data.Option.length; i++){ var temp = document.createElement('option'); temp.value = data.Value[i]; temp.text = data.Option[i]; select.appendChild(temp); } } } Working demo: http://jsfiddle.net/jfriend00/e8629/ Here are a couple ideas when looking to simplify code:
{ "domain": "codereview.stackexchange", "id": 6144, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "javascript", "url": null }
% Print rank rankOfM = rank(M) • Have you computed any small examples, to get a feel for what's going on? – Gerry Myerson Sep 18 at 4:44 • @GerryMyerson This is a great suggestion! I tried to do it by hand, but the examples are fairly large. Maybe I could code something up in MATLAB. – Michael Wehar Sep 18 at 4:50 • Note: The question was modified several times until it got to its current form which most accurately captures what I'm looking for. Thank you! – Michael Wehar Sep 18 at 5:34 • It's not nice to change the question after someone has posted an answer. – Gerry Myerson Sep 18 at 5:50 • @GerryMyerson Thank you for the comment! My original post had some issues. I thought it was best to improve it, but it might have been better if I had posted a new question. The answer that was posted is valuable and still relevant. – Michael Wehar Sep 19 at 21:58 No, if $$k>3$$ then $$V$$ is not linearly independent (over any field). We can write the all-$$1$$s vector as two different linear combinations: $$\sum_r v_{2,r} = \sum_r v_{3,r}.$$ The same occurs if $$X$$ is any set containing at least two numbers. • This is a good point! Further, any idea on what bounds can be given for $dim(V)$? – Michael Wehar Sep 18 at 4:55 For $$n \ge 1$$, let $$k(n)$$ denote the minimum value of $$k$$ for which the associated $$V$$ has $$\dim(V) = n$$. Claim: For each $$\epsilon > 0$$, $$k(n) \ge (1-\epsilon)\sqrt{n\log n}$$ for all large $$n$$.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9539660923657093, "lm_q1q2_score": 0.8164529778265247, "lm_q2_score": 0.8558511506439707, "openwebmath_perplexity": 318.32853433815245, "openwebmath_score": 0.8228177428245544, "tags": null, "url": "https://math.stackexchange.com/questions/3360599/binary-vectors-defined-by-remainders-modulo-prime-numbers-what-is-the-dimension" }
machine-learning, deep-learning Title: What is the best ML/DL model to choose to calculate mobile network utilization increase Let’s imagine the following scenario: The marketing department decides to do the promotion next month and would like to give to every single mobile customer extra 10GB of data. Due to simplicity, I have the following 5 features: A: Amount of data volume downloaded by all customers per base station/transmitter (MB) B: Number of connected customers to the base station (#) C: Total duration of the connected customers to the base station (seconds) D: Radio resource utilization (%) E: Average throughput per customer (kbps) Considering the marketing requirements, I can work out that the average increase of data volume per base station is 10% (Feature A). The question is: What impact is this promotion going to have on the mobile network (every single base station), mainly on Radio resource utilization (Feature D) and the average throughput per customer (Feature E). I have tons of data for each base station from low to high traffic scenario, but I cannot train the model with 10% extra data volume traffic increase as this situation never happened in the network. It is still going to be "the peak data volume per Base station so far" + extra 10%. How can I train the model, if target label is unknown (peak data volume + extra 10% never happened on the particular base station and I cannot take the data/stast from a different base station as the traffic pattern is different)? It would be enough to point me into the direction, I can find more info and to study it further. Thanks. I do not see why having a percentage greater than 100% might be a problem to you? You might normalize it by dividing by 100, so 100% might become 1. That feature will be your label. I suggest you try several regression algorithms, with different settings, and choose the best model i.e. the model which provides the best result. 
You have a bunch of regressors in scikit-learn; you may also try xgboost's XGBRegressor or CatBoostRegressor.
{ "domain": "datascience.stackexchange", "id": 7496, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, deep-learning", "url": null }
neural-network, time-series, lstm, rnn There are two designs for two assumptions: Products are not related to each other. Therefore, each product could be modeled separately. That is, each timestamp is X(t) = [feature1, feature2] or, including the target, X(t)|y(t) = [feature1, feature2, target]. And we build a model for each product separately. In summary, LSTM receives two 1 x 3 sequences for t-1 and t, and outputs a 1 x 1 target for t + 1. In notation: $$(\overbrace{X_{t-1}|y_{t-1}}^{1 \times 3}, \overbrace{X_{t}|y_{t}}^{1 \times 3}) \rightarrow \overbrace{y_{t+1}}^{1 \times 1}$$ Products are related to each other, meaning product1 can help product2 to predict its target. For this, we just need to flatten the 3 x 2 matrix to a 1 x 6 vector, where the order of values does not matter. That is, X(t) = [product1_feature1, product1_feature2, ..., product3_feature2] or X(t) = [product1_feature1, product2_feature1, ..., product3_feature2] We can also add the targets, for example X(t)|y(t) = [product1_feature1, product2_feature1, ..., product3_feature2, target1, ..., target3] This way, dimension of each timestamp would be 9 (6 + 3), and a sequence of two timestamps would be [ [product1_feature1, product2_feature1, ..., product3_feature2, target1, ..., target3], # t-1 [product1_feature1, product2_feature1, ..., product3_feature2, target1, ..., target3] # t ]
{ "domain": "datascience.stackexchange", "id": 5002, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "neural-network, time-series, lstm, rnn", "url": null }
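The shapes in the second design above can be made concrete with a small NumPy sketch (my own illustration; the random numbers stand in for the real features and targets). It builds a batch of one sequence of two timestamps, each a 1 x 9 vector (3 products x 2 features, flattened, plus 3 targets):

```python
import numpy as np

n_products, n_features, T = 3, 2, 2  # T timestamps: t-1 and t

rng = np.random.default_rng(0)
sequence = []
for _ in range(T):
    X_t = rng.random((n_products, n_features))  # the 3 x 2 feature matrix at this timestamp
    y_t = rng.random(n_products)                # the 3 targets at this timestamp
    # flatten 3 x 2 -> 6 values, then append the 3 targets -> 9 values
    sequence.append(np.concatenate([X_t.ravel(), y_t]))

# (batch, timesteps, features) — the layout LSTM layers usually expect
batch = np.stack(sequence)[np.newaxis]
print(batch.shape)  # (1, 2, 9)
```

`ravel()` always flattens in the same (row-major) order, which keeps the ordering of values consistent across timestamps — the one requirement on whichever ordering you pick.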
turing-machines, computability Title: Turing Machine to return all prime numbers My task is to design Turing Machine that ignores its input and returns all the prime numbers. I have some basic idea how to do that but I am not completely sure whether my approach is correct or not. So no matter what the input is, we should ignore it. I think it would be sufficient to add another tape with cells $1^*,2,\dots,n$. Now, I would use Sieve of Eratosthenes algorithm as follows: Move a head to the right until the head encounters an unmarked symbol. Mark the symbol with a star and write it down to another write-only tape representing the output, i.e. the symbol is a prime number. Now, for $n$ being prime, I am supposed to mark every $n$-th symbol with a star. I am not sure how to do it. Reset. For the third step, I think I should utilize another marking mechanism to denote a gap between the beginning and the prime resolved in that round. Then, I would be moving that "window" and whenever the beginning of the "window" would reach the prime, I would mark the symbol at the end of the "window" with a star. Not sure how to express this formally. Designing a Turing machine is very laborious, even for simple tasks. So I suggest you keep the algorithm as simple as possible, even if it is very inefficient. The very simplest prime generating algorithm (even simpler than the sieve of Eratosthenes) is as follow: Start with $n=2$ - this is prime. Set $j$ equal to $n$. Add $1$ to $n$ Set $k$ equal to $n$. Subtract $j$ from $k$. If $k$ is greater than $0$ go to step $5$. If $k$ equals $0$ then $n$ is not prime; go to step $2$. Subtract $1$ from $j$. If $j$ equals $1$ then $n$ is prime; go to step $2$. Go to step $4$.
{ "domain": "cs.stackexchange", "id": 14826, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "turing-machines, computability", "url": null }
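Before encoding the step list above as a Turing machine, it is worth checking that it really enumerates the primes. A direct Python transcription (mine) of the steps, using only assignment, increment/decrement and subtraction, is:

```python
def primes_by_subtraction(limit):
    """Trial division by repeated subtraction, following the step list."""
    primes = [2]                 # step 1: start with n = 2 — this is prime
    n = 2
    while n < limit:
        j = n                    # step 2
        n += 1                   # step 3
        while True:
            k = n                # step 4
            while k > 0:
                k -= j           # steps 5-6: subtract j from k until k <= 0
            if k == 0:           # step 7: j divides n evenly, so n is not prime
                break
            j -= 1               # step 8
            if j == 1:           # step 9: no divisor in 2..n-1, so n is prime
                primes.append(n)
                break
            # step 10: go back to step 4 with the smaller j
    return primes

print(primes_by_subtraction(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Every operation here (copy, add one, subtract, compare with zero) maps onto tape manipulations a Turing machine can do with unary counters, which is why this algorithm is easier to encode than the sieve.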
quantum-mechanics, standard-model, stability, proton-decay https://en.wikipedia.org/wiki/Electron So the muon is unstable, because we have observed muons to decay, but the electron is stable, because we have never observed one to decay, but the electron does have a mean lifetime, 6.6*10^28 years. Even as per the SM, the contradiction is there. Do we only say that stability is boolean, because we have never observed the electron and proton (free) to decay, but we give them a mean lifetime? As per QM, it is all about probabilities, and nothing lasts forever. Even the stable particles ( electron and the proton) will have a mean lifetime. Does QM probabilities win or does the SM stability (boolean) definition win? Question: Is stability a boolean, that is, are we defining stability as the particles that have never been observed to decay (electron and free proton), and are we defining unstable the particles that have been already observed to decay? Or are we saying that QM is all about probabilities, and even the stable particles (electron, proton) do have a mean lifetime and will eventually decay? Now if the proton rich nucleus is unstable, and it is all about just probabilities as per QM, then everything, every quantum system (composite) is unstable. Period. That is a truly amazing exercise in non sequitur. To be frank, logic seems to have completely deserted your post in its entirety $-$ nothing seems to be logically connected to what comes before or after. So, let's start with the dictionary. Boolean variable: Any variable, from the domain of Boolean algebra, having one of only two values,
{ "domain": "physics.stackexchange", "id": 59456, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, standard-model, stability, proton-decay", "url": null }
discrete-signals, sampling $$X^F(\omega)(1-e^{-j\omega T})=0$$ For every $\omega$, either $X^F(\omega)=0$ or $(1-e^{-j\omega T})=0$. Since $(1-e^{-j\omega T})=0\implies\omega T=2\pi k$ for integer $k$, we know that $X^F(\omega)=0$ for $\omega\neq\frac{2\pi k}{T}$. This means that the CTFT of a periodic signal is discrete.
{ "domain": "dsp.stackexchange", "id": 8297, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "discrete-signals, sampling", "url": null }
radio-astronomy, radio-telescope, event-horizon-telescope data = (('Northern Extended Millimeter Array', (44.634, 5.908, 2550.)), ('IRAM 30 meter telescope', (37.066, -3.393, 2850.)), ('The Greenland Telescope now near Thule Air Base', (76.531, -68.703, 10.)), ('Combined Array for Research in Millimeter-wave Astronomy CARMA', (37.280, -118.142, 2196.)), ('Kitt Peak National Observatory 12 meter Submillimeter Telescope (SMT)', (31.9583, -111.5967, 2096.)), ('Mt. Graham International Observatory 12 meter ALMA prototype', (32.701, -109.892, 3191.)), ('The Large Millimeter Telescope Alfonso Serrano', (18.985, -97.315, 4600.)), ('ALMA', (-22.971, -67.703, 4800.)), ('Caltech Submillimeter Observatory', (19.823, -155.476, 4140.)), ('South Pole Telescope', (-90.0, 0.0, 2800.))) # https://eventhorizontelescope.org/array # https://astronomy.stackexchange.com/questions/26413/math-behind-a-uv-plot-in-interferometry datadict = dict(data) import numpy as np import matplotlib.pyplot as plt from skyfield.api import Topos, Loader, EarthSatellite from mpl_toolkits.mplot3d import Axes3D halfpi, pi, twopi = [f*np.pi for f in (0.5, 1, 2)] degs, rads = 180/pi, pi/180 km = 0.001 from collections import namedtuple Site = namedtuple('Site', 'name lat lon elevation') # stand-in: the Site class is defined elsewhere in the original script sites = [Site(a, *b) for a, b in datadict.items()]
{ "domain": "astronomy.stackexchange", "id": 3622, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "radio-astronomy, radio-telescope, event-horizon-telescope", "url": null }
bash, linux, installer AVAILABLE_KERNELS=$(ukuu --list) if ! echo "$AVAILABLE_KERNELS" | grep -oPzq 'Available Kernels.*?\n=+\n\K.*?(Installed|Running).*?\n'; then ukuu --install-latest else LATEST_KERNEL=$(echo "$AVAILABLE_KERNELS" | grep -oPz 'Available Kernels.*?\n=+\n\Kv.+?(?=\s.*?\n)') echo "Latest kernel ${LATEST_KERNEL} installed. Removing old kernels apart from ${FALLBACK_KERNELS[@]}." declare -a INSTALLED_KERNELS updateInstalledKernels echo "Installed kernels: ${INSTALLED_KERNELS[@]}." for INSTALLED_KERNEL in "${INSTALLED_KERNELS[@]}" do echo "$INSTALLED_KERNEL" if ! containsElement "$INSTALLED_KERNEL" "${FALLBACK_KERNELS[@]}" then if ! echo "$INSTALLED_KERNEL" | grep -q "$LATEST_KERNEL" then echo "${INSTALLED_KERNEL} is not fallback, nor latest. Removing it..." ukuu --remove "$INSTALLED_KERNEL" fi fi done <<< "$INSTALLED_KERNELS" fi The only thing I know now is grep -P and everything seems like a nail if you have a hammer. I hope that's an exaggeration. Otherwise, I don't know how you will understand a review ;-) Use more here-strings The script uses here-strings in only one place, when there are more places where it would be good to use. For example, instead of this: LATEST_KERNEL=$(echo "$AVAILABLE_KERNELS" | grep -oPz 'Available Kernels.*?\n=+\n\Kv.+?(?=\s.*?\n)')
{ "domain": "codereview.stackexchange", "id": 32291, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "bash, linux, installer", "url": null }
fluid-dynamics, acoustics Title: Whistle Physics I'm looking for a simple explanation of how a whistle operates. I know that forcing air over a sharp lip can set up a wave in a resonating cavity, but how? "Most whistles operate due to a feedback mechanism between flow instability and acoustics"--yes, but what does that feedback mechanism look like? I was surprised to be unable to find a basic diagram online demonstrating how a whistle operates. I did find lots of images like this: . . . but such images are unhelpful since they don't show exactly what's producing the oscillation! Let's consider the specific type of whistle shown in the question. When we blow the whistle, air is forced to rush out through the narrow opening. The flow of air at the center of the stream is significantly faster than that of the neighboring air close to the main stream. If the air stream is easily deflected (unstable), vortexes are generated. If the same thing happens repeatedly, many more vortexes with similar properties will be generated. These vortexes cause the air pressure to vary in a periodic way, so a sound wave is produced. The frequency of this sound wave is related to the rate at which the vortexes are shed. Since the process is rather chaotic, many different rates or frequencies are produced at a time. As you can see in the picture, the stream is divided into two parts: one part comes out of the opening and the other part stays inside. Sound waves trapped inside will interfere with each other. If the frequency of the sound doesn't match any of the resonant frequencies of the chamber, the waves will interfere destructively and vanish quickly. However, if the frequency matches a resonant frequency of the cavity, the wave's amplitude will increase over time. The rate of increase decreases as the amplitude builds up. Eventually it will reach a steady state. At this point the amplitude of the sound wave is strong enough that the sound becomes very audible.
The sound wave comes out of the hole, gets dispersed strongly, and finally reaches our ears. Some whistles have a small ball that bounces around inside the cavity. The ball changes the shape of the cavity and, at the same time, its resonant frequencies. Thus it allows us to hear a wider range of sound frequencies.
{ "domain": "physics.stackexchange", "id": 57669, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "fluid-dynamics, acoustics", "url": null }
formal-languages, regular-expressions To be clear, by table-driven lexer, I mean something that converts the regexp to a DFA and then precomputes a table that represents the transition table. This is useful in cases where the regexp is fixed and known at compile time. For many regexps seen in practice, the size of the DFA is not too large and so this leads to a fairly efficient matcher algorithm.
{ "domain": "cs.stackexchange", "id": 16766, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "formal-languages, regular-expressions", "url": null }
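As a concrete illustration (mine, not from the question) of such a precomputed transition table, here is a tiny Python matcher for the fixed regexp `ab*c`: the DFA is worked out by hand at "compile time", and matching is then a simple table walk.

```python
# Hand-built DFA transition table for the fixed regexp "ab*c".
# States: 0 = start, 1 = saw 'a' (plus any 'b's), 2 = accept, 3 = dead.
TABLE = {
    0: {'a': 1, 'b': 3, 'c': 3},
    1: {'a': 3, 'b': 1, 'c': 2},
    2: {'a': 3, 'b': 3, 'c': 3},
    3: {'a': 3, 'b': 3, 'c': 3},
}
ACCEPTING = {2}

def matches(s):
    state = 0
    for ch in s:
        state = TABLE[state].get(ch, 3)  # unknown characters go to the dead state
    return state in ACCEPTING

print(matches("abbbc"), matches("ac"), matches("abcb"))  # True True False
```

For larger alphabets the table is often stored as a 2-D array indexed by (state, byte), so each input character costs one array lookup.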
general-relativity, universe So no, no one is at the "edge" of the universe. By the way, general relativity in no way requires homogeneity and isotropy. These are simply assumptions cosmologists make in order to take an utterly intractable problem (evolving the whole universe) and make it absurdly simple (see the FRW metric, which, although it may look complicated at first, is pretty much the most trivial thing you can do with general relativity). The homogeneous/isotropic assumptions, by the way, turn out to be justified on cosmological scales, though this was discovered only after the early days of GR-based cosmology, once we had very deep galaxy surveys.
{ "domain": "physics.stackexchange", "id": 6038, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "general-relativity, universe", "url": null }
algorithm, c, primes /* Function: isDifferenceOfPrimes(); */ char isDifferenceOfPrimes(unsigned int target, int* primes, unsigned int size){ int* ptrToArray = primes; unsigned int i, j; for (i = 0; i < size - 1; ++i){ for (j = i + 1; j < size; ++j){ if (target == ptrToArray[j] - ptrToArray[i]){ printf("%d = %d - %d", target, ptrToArray[j], ptrToArray[i]); return 1; } } } return 0; } #endif GoldBach.h #ifndef GOLDBACH_H #define GOLDBACH_H // probably all upperBounds in the for loops could be doubled /* Function: First(); Test first hypothesis. */ void First(int* primes, unsigned int size, unsigned int upperBound){ unsigned int even; for (even = 4; even <= upperBound; even += 2){ if (isSumOfTwoPrimes(even, primes, size)){ printf("\nFirst Goldbach's hypothesis not disproved!\n"); }else{ printf("\n?Exception: %d\n", even); } } } //----------------------------------------------------------- /* Function: Second(); Test second hypothesis. */ void Second(int* primes, unsigned int size, unsigned int upperBound){ unsigned int natural;
{ "domain": "codereview.stackexchange", "id": 21770, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithm, c, primes", "url": null }
machine-learning, neural-network, gradient-descent Title: How does Gradient Descent work? I know the calculus and the famous hill and valley analogy (so to say) of gradient descent. However, I find the update rule of the weights and biases quite terrible. Let's say we have a couple of parameters, one weight 'w' and one bias 'b'. Using SGD, we can update both w and b after the evaluation of each mini-batch. If the size of the mini-batch is 1, we give way to online learning. What if I do not want to use any of these methods and simply want to use "Gradient descent" in its entirety? What is the update rule in that case? To be more precise: at what step do w and b get updated? And at what step do we stop? That said, the elephant in the room is the initial value of w and b. What is the criterion for choosing the first values of w and b? Suppose you have a strictly convex function $f(x)$ that you'd like to minimize. To do so using gradient descent, you keep applying $$x_{i+1} = x_{i}-\lambda\frac{\partial f}{\partial x}$$ until convergence; that is, when $x_i$ is changing very weakly or not changing at all, because that implies that ${\partial f}/{\partial x}$ is zero or very close to zero in that neighborhood, which further mathematically implies that you've reached the minimum. The same applies if $f$ is instead a function of many variables: the gradient descent rule applies to each of them. Now in data science $f$ can be a function of many variables that also involves a sum, for instance $$f(\theta_1,\theta_0)=\sum _{i=1}^m(y_{i}-(\theta_1 x_i+\theta_0))^2$$ where $x_i$ and $y_i$ are drawn from some dataset of length $m$. In that case ${\partial f}/{\partial \theta_1}$ and ${\partial f}/{\partial
{ "domain": "datascience.stackexchange", "id": 10072, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, neural-network, gradient-descent", "url": null }
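A short Python sketch (my own illustration) of plain full-batch gradient descent on the least-squares objective $f(\theta_1,\theta_0)$ above — every update uses the entire dataset, unlike SGD or online learning, and the initial parameter values are simply chosen (zeros here; small random values are also common):

```python
import random

# synthetic data around the line y = 2x + 1 (made up for illustration)
random.seed(0)
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + random.gauss(0, 0.05) for x in xs]

theta1, theta0 = 0.0, 0.0   # initial values: a simple, common choice
lam = 0.01                  # learning rate
m = len(xs)

for _ in range(5000):
    # gradients of sum_i (y_i - (theta1*x_i + theta0))^2 over the FULL batch
    g1 = sum(-2 * x * (y - (theta1 * x + theta0)) for x, y in zip(xs, ys))
    g0 = sum(-2 * (y - (theta1 * x + theta0)) for x, y in zip(xs, ys))
    theta1 -= lam * g1 / m  # dividing by m keeps the step size independent of m
    theta0 -= lam * g0 / m

print(theta1, theta0)  # close to 2 and 1
```

Each iteration computes the full sums $\partial f/\partial\theta_1$ and $\partial f/\partial\theta_0$ over all $m$ points before updating; here we stop after a fixed number of iterations, though stopping when the parameters barely change between iterations works too.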
java, junit, assertions /* Accessors and Mutators */ /** * @return Get Address Id */ public BigInteger getAddressId() { return addressId; } /** * @param addressId Set Address Id */ public void setAddressId(BigInteger addressId) { this.addressId = addressId; } /** * @return Get Addressee Name */ public String getName() { return name; } /** * @param name Set Addressee Name */ public void setName(String name) { this.name = name; } /** * @return Get Full Postal Address */ public String getAddress() { return address; } /** * @param address Set Full Postal Address */ public void setAddress(String address) { this.address = address; } /** * @return Get Addressee Contact Number */ public String getContactNo() { return contactNo; } /** * @param contactNo Set Addressee Contact Number */ public void setContactNo(String contactNo) { this.contactNo = contactNo; } /** * @return Get Addressee postal code */ public String getPostalCode() { return postalCode; } /** * @param postalCode Set Addressee postal code */ public void setPostalCode(String postalCode) { this.postalCode = postalCode; } } AddressTest: /** * All tests for class Address * * @author Sandeep Chatterjee * @since 24/8/2015 */ public class AddressTest { /** * @see [http://www.jqno.nl/equalsverifier/] * @see [https://github.com/jqno/equalsverifier/blob/master/README.md] */ @Test public void equalsContract() { EqualsVerifier.forClass(Address.class) .suppress(Warning.NONFINAL_FIELDS, Warning.NULL_FIELDS) .verify(); }
{ "domain": "codereview.stackexchange", "id": 15347, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "java, junit, assertions", "url": null }
## Example 5 Again, Delaware, with an initial divisor of $$21,900.82927$$: $$\begin{array}{lrrc} \text { County } & \text { Population } & \text{ Quota } & \text{ Initial } \\ \hline \text { Kent } & 162,310 & 7.4111 & 7 \\ \text { New Castle } & 538,479 & 24.5872 & 25 \\ \text { Sussex } & 197,145 & 9.0017 & 9 \\ \textbf{ Total } & \bf{ 897,934 } & & \bf{ 41 }\end{array}$$ Solution This gives the required total, so we’re done. ## Example 6 Again, Rhode Island, with an initial divisor of $$14,034.22667$$: $$\begin{array}{lrrc} \text { County } & \text { Population } & \text{ Quota } & \text{ Initial } \\ \hline \text { Bristol } & 49,875 & 3.5538 & 4 \\ \text { Kent } & 166,158 & 11.8395 & 12 \\ \text { Newport } & 82,888 & 5.9061 & 6 \\ \text { Providence } & 626,667 & 44.6528 & 45 \\ \text { Washington } & 126,979 & 9.0478 & 9\\ \textbf{ Total } & \bf{ 1,052,567 } & & \bf{ 76 }\end{array}$$ Solution This is too many, so we need to increase the divisor. Let’s try $$14,100$$:
{ "domain": "libretexts.org", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9843363503693294, "lm_q1q2_score": 0.8137624764866885, "lm_q2_score": 0.8267118004748677, "openwebmath_perplexity": 802.752424850072, "openwebmath_score": 0.8139382600784302, "tags": null, "url": "https://math.libretexts.org/Bookshelves/Applied_Mathematics/Math_in_Society_(Lippman)/04%3A_Apportionment/4.04%3A_Websters_Method" }
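The table arithmetic can be reproduced with a short sketch (Python, my own; the function name is made up). Webster's method divides each population by the divisor and rounds each quota to the nearest whole number; the Rhode Island numbers from Example 6 illustrate both the initial divisor and a modified one:

```python
def webster(populations, divisor):
    """Quota = population / divisor, rounded to the nearest whole number."""
    quotas = {name: pop / divisor for name, pop in populations.items()}
    seats = {name: int(q + 0.5) for name, q in quotas.items()}  # round half up
    return quotas, seats

rhode_island = {"Bristol": 49875, "Kent": 166158, "Newport": 82888,
                "Providence": 626667, "Washington": 126979}

for divisor in (14034.22667, 14100):
    quotas, seats = webster(rhode_island, divisor)
    print(divisor, seats, "total:", sum(seats.values()))
# 14034.22667 gives 4 + 12 + 6 + 45 + 9 = 76 (one too many)
# 14100 gives 4 + 12 + 6 + 44 + 9 = 75 (Providence drops to 44)
```

Raising the divisor shrinks every quota, so the total of the rounded quotas can only stay the same or fall — which is why the modified divisor 14,100 brings the total down to the required 75.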
gazebo Title: diff_drive skid steer, front wheels not turning/veering right Hi, I'm creating a URDF for my robot which is tracked and I'm trying to use the diff_drive plugin in gazebo to simulate it. I can control the robot using the keyboard teleop utility but it veers to the right and when I look at the wheels in Gazebo I can see that though the rear wheels are turning as expected the front wheels aren't, if anything they look like they're digging in. If I drive in reverse the opposite happens. The joints are all set to be continuous rather than revolute with a limit, I just can't figure this out and feel I'm bashing my head against the wall! Is the wheel not turning expected behaviour for diff_drive with skid steer emulation or have I messed up with the URDF?
{ "domain": "robotics.stackexchange", "id": 36934, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gazebo", "url": null }
programming-challenge, perl, palindrome my ($a,$b,$x) = ($hi,$hi); for (;;) { $x = $a*$b; if ($x eq reverse $x) { print "$x is $a * $b" if ($x > $lm); last if ($n > 3); # show them all for small numbers } next if (!(--$a < $lo) & !(++$b > $hi)); # no short-cut operator here ++$a; --$b; print " edge $a $b" if ($n < 3); # show edge for small numbers $a += $b; $b = int --$a/2; $a -= $b; last if ($a*$b < $lm); } What you have shown is twisted spaghetti code full of misguided micro-optimizations. Your algorithm is not at all obvious. This is it cleaned up, so that we can analyze it: #!/usr/bin/env perl use strict; use warnings; use feature 'say'; my $n = 3; my $lo = 10**($n - 1); my $hi = (10**$n) - 1; my $smallest_product = 10**(2*$n - 2); my $index = 0; for (my $sum = 2*$hi; ; $sum--) { my $a = int $sum/2; my $b = $sum - $a; last if $a*$b < $smallest_product; while ($lo <= $a and $b <= $hi) { $index++; my $candidate = $a * $b; if ($candidate eq reverse $candidate) { say "$candidate = $a * $b (candidate $index)" if $candidate > $smallest_product; exit if $n > 2; $a--; $b++; } }
{ "domain": "codereview.stackexchange", "id": 12041, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "programming-challenge, perl, palindrome", "url": null }
ros, ros-kinetic, ubuntu, build, source -- Configuring incomplete, errors occurred! See also "/home/dh/build_isolated/class_loader/CMakeFiles/CMakeOutput.log". See also "/home/dh/build_isolated/class_loader/CMakeFiles/CMakeError.log". <== Failed to process package 'class_loader': Command '['/home/dh/install_isolated/env.sh', 'cmake', '/home/dh/src/class_loader', '-DCATKIN_DEVEL_PREFIX=/home/dh/devel_isolated/class_loader', '-DCMAKE_INSTALL_PREFIX=/home/dh/install_isolated', '-DCMAKE_BUILD_TYPE=Release', '-G', 'Unix Makefiles']' returned non-zero exit status 1 Reproduce this error by running: ==> cd /home/dh/build_isolated/class_loader && /home/dh/install_isolated/env.sh cmake /home/dh/src/class_loader -DCATKIN_DEVEL_PREFIX=/home/dh/devel_isolated/class_loader -DCMAKE_INSTALL_PREFIX=/home/dh/install_isolated -DCMAKE_BUILD_TYPE=Release -G 'Unix Makefiles' Command failed, exiting. One would think this would be easy to fix since I am installing on a clean system. However, I don't have any clue what is happening and how to resolve this error. Originally posted by Chrizzl on ROS Answers with karma: 48 on 2017-09-26 Post score: 0
{ "domain": "robotics.stackexchange", "id": 28924, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros, ros-kinetic, ubuntu, build, source", "url": null }
c++, c++11, console Two things to note here. First, I'm using C++17 and so std::string_view, allowing for this to be constexpr, but if you don't have that, it's simple enough to make them plain const std::string instead. Second, the way the characters are physically arranged makes it much simpler to visually verify that the characters are correct. Next, I'd recommend creating a private member function like this: std::string line(unsigned n) const; This constructs the top, middle or bottom line and returns a single string. Here's how I wrote it: std::string ConsoleTable::line(unsigned n) const { std::stringstream line; n *= 3; line << markers[linetype][2+n]; for (std::size_t i{0}; i < widths.size()-1; ++i) { for (std::size_t j{0}; j < (widths[i] + padsize + padsize); ++j) { line << markers[linetype][0]; } line << markers[linetype][3+n]; } for (std::size_t j{0}; j < (widths.back() + padsize + padsize); ++j) { line << markers[linetype][0]; } line << markers[linetype][4+n] << '\n'; return line.str(); } Here is how the private member data variables are declared: std::size_t padsize; Style linetype; bool innerlines; std::vector<std::string> header; std::vector<std::size_t> widths; std::vector<std::vector<std::string>> rows;
{ "domain": "codereview.stackexchange", "id": 30024, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, c++11, console", "url": null }
c# Exception ex = Assert.Throws<Exception>(() => chessBoard.GetAvailableMoves(startingSquare)); Assert.That(ex.Message == "Square must have a chess piece on it to get the available moves."); } #region utility methods private bool _IsDiagonalMove(Square sourceSquare, Square destSquare) { int destColumn = destSquare.column; int destRow = destSquare.row; int sourceColumn = sourceSquare.column; int sourceRow = sourceSquare.row; bool isDiagonalMove = Math.Abs(destColumn - sourceColumn) == Math.Abs(destRow - sourceRow); return isDiagonalMove; } private bool _IsVerticalMove(Square sourceSquare, Square destSquare) { int destColumn = destSquare.column; int destRow = destSquare.row; int sourceColumn = sourceSquare.column; int sourceRow = sourceSquare.row; bool isVerticalMove = sourceColumn == destColumn && sourceRow != destRow; return isVerticalMove; } private bool _IsHorizontalMove(Square sourceSquare, Square destSquare) { int destColumn = destSquare.column; int destRow = destSquare.row; int sourceColumn = sourceSquare.column; int sourceRow = sourceSquare.row; bool isHorizontalMove = sourceRow == destRow && sourceColumn != destColumn; return isHorizontalMove; } private bool _HasMovedOnlyOneSquare(Square sourceSquare, Square destSquare) { int destColumn = destSquare.column; int destRow = destSquare.row; int sourceColumn = sourceSquare.column; int sourceRow = sourceSquare.row;
{ "domain": "codereview.stackexchange", "id": 37437, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#", "url": null }
## The sum of the lengths of two pieces of rope is 65 feet. How long is the shorter piece? ##### This topic has expert replies Legendary Member Posts: 563 Joined: 01 Mar 2018 Followed by:1 members ### The sum of the lengths of two pieces of rope is 65 feet. How long is the shorter piece? by Gmat_mission » Wed Jun 24, 2020 8:00 am The sum of the lengths of two pieces of rope is 65 feet. How long is the shorter piece? (1) The lengths of the pieces of rope are in the ratio 8:5. (2) One piece of rope is 15 feet longer than the other piece. [spoiler]OA=D[/spoiler] Source: GMAT Prep Legendary Member Posts: 1833 Joined: 02 Mar 2018 Followed by:3 members ### Re: The sum of the lengths of two pieces of rope is 65 feet. How long is the shorter piece? by deloitte247 » Sat Jun 27, 2020 1:47 am Given that: x + y = 65 feet Target question => How long is the shorter piece?
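Both statements determine the lengths uniquely, which a quick check confirms (just a verification sketch, not part of the official solution):

```python
# Let x be the longer piece and y the shorter piece, with x + y = 65.

# Statement 1: x : y = 8 : 5, so x = 8k and y = 5k with 13k = 65.
k = 65 / 13
s1 = (8 * k, 5 * k)   # -> (40.0, 25.0): shorter piece is 25 ft

# Statement 2: x = y + 15, so (y + 15) + y = 65.
y = (65 - 15) / 2
s2 = (y + 15, y)      # -> (40.0, 25.0): same unique answer

print(s1, s2)  # each statement alone is sufficient, hence answer D
```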
{ "domain": "beatthegmat.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9899864299677056, "lm_q1q2_score": 0.8023631145608311, "lm_q2_score": 0.8104789018037399, "openwebmath_perplexity": 2867.6304475012444, "openwebmath_score": 0.2340436577796936, "tags": null, "url": "https://www.beatthegmat.com/the-sum-of-the-lengths-of-two-pieces-of-rope-is-65-feet-how-long-is-the-shorter-piece-t314897.html" }
Problem #2: A box contains 18 tennis balls, 8 new 10 old. 3 balls are picked randomly and played with (so if any of them were new, they become 'old'), and returned to the box. If we pick 3 balls for the second time (after this condition), what is P that they are all new? I broke this down into 4 pieces: P(3 new second round|3 new first round)P(3 new first round) + P(3 new second round|2 new 1 old first round)P(2 new 1 old first round) + P(3 new second round|1 new 2 old first round)P(1 new 2 old first round) + P(3 new second round|3 old first round)(3 old first round). However, I was supposed to used binomials to count this. Instead I had a feeling that I should just multiply probabilities this way: \begin{align*} \frac{5\times4\times3}{18\times17\times16} &\times \frac{8\times7\times 6}{18\times 17\times 16} + \frac{6\times5\times 4}{18\times17\times16} \times \frac{8\times7\times10}{18\times17\times16}\\ &\quad + \frac{7\times6\times5}{18\times17\times16} \times \frac{8\times10\times9}{18\times17\times16} + \frac{8\times7\times6}{18\times17\times16} \times \frac{10\times9\times8}{18\times17\times16}. \end{align*} I get the correct answer with binomials, but this equation that I constructed undercounts the possibilities. Could you tell me what I am missing? ty! - Let's reduce problem 1 to see where you are going wrong. Let's say that there are 7 fishes, 4 trout and 3 carp, and you want to count how many ways there are of catching 2 fishes, at least one of them a carp.
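What the ordered products miss is the number of orderings of the mixed first-round draws: 8·7·10/(18·17·16) is the probability of one particular sequence (new, new, old), and there are $\binom{3}{2}=3$ such sequences. Comparing the two computations directly (a sketch with binomial coefficients, 8 new / 10 old as in the problem):

```python
from math import comb

total = comb(18, 3)

# Correct: condition on the number k of new balls played in round 1.
correct = sum(
    comb(8, k) * comb(10, 3 - k) / total   # P(k new in round 1)
    * comb(8 - k, 3) / total               # P(3 new in round 2)
    for k in range(4)
)

# The ordered-product sum from the question, term by term
# (each mixed term is missing its factor of 3 for the orderings):
d = 18 * 17 * 16
undercount = (
    (5*4*3)/d * (8*7*6)/d
    + (6*5*4)/d * (8*7*10)/d
    + (7*6*5)/d * (8*10*9)/d
    + (8*7*6)/d * (10*9*8)/d
)

print(correct, undercount)  # the ordered version comes out smaller
```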
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9783846640860382, "lm_q1q2_score": 0.8044172426750339, "lm_q2_score": 0.8221891370573386, "openwebmath_perplexity": 622.0141665637818, "openwebmath_score": 0.8796431422233582, "tags": null, "url": "http://math.stackexchange.com/questions/67357/where-am-i-doing-it-wrong-trying-to-develop-intuition-for-probability?answertab=active" }
machine-learning Title: What was going on before PAC learning I am investigating PAC learning (computational learning theory) as a beginner with no previous knowledge of machine learning / AI. I am investigating the model mainly from a historical point of view. For this, the most important things are of course the results based on the model. There are enough papers out there that document these results. But I also want to write something about what was going on before PAC learning, as to sketch the historical context up to where Valiant came with the notion of the PAC model. No papers/surveys I've found so far document this, and as someone with no real knowledge of machine learning, it is hard to find this out. I am therefore asking this soft question here, because I believe there are enough experts that can help me with this. References are highly appreciated. When I can research and study what was going on before PAC, I might get a better appreciation as to why the academic world is so enthusiastic about the PAC model, which is also something interesting to document in my historical work! References are highly appreciated. An author is expected to address the question of the context and relevance of his results at the beginning of his publication. I just skimmed over the introduction of "L. Valiant. A theory of the learnable. Communications of the ACM, 27, 1984." again, and found out that Valiant indeed covered your question well. The original paper by Valiant is both freely available and not too difficult to read. (Except section 7, which only proves that the author can also tackle challenging mathematical problems, but doesn't contribute much to the real content of the paper.) Reading at least its introduction will be more rewarding than reading my overly long answer to this question, so I suggest really trying it.
The rest of this answer tries to cite some passages from the introduction which should indicate whether reading this introduction might answer the question about the historical context. Note however that an author has the natural prerogative to be biased with respect to such questions. ... such a system would, at least, be a very good start. First, when one examines the most famous examples of systems that embody preprogrammed knowledge, namely, expert systems such as DENDRAL and MYCIN, essentially no logical notation beyond the propositional calculus is used.
{ "domain": "cs.stackexchange", "id": 1401, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning", "url": null }
python, android, primes, kivy from kivy import app from kivy.uix import boxlayout, button, popup, textinput from kivy.core.window import Window A minor point is that the class name would read much better as IsPrimeApp. Your get_font is misnamed; it should be get_font_size. The equation is also rather strange; it's infinitely large at size 10, negatively sized at size 11 and you end up with very negative sizes to get small fonts. It's also advisable to use formatting over concatenation. I would suggest something like: def font_size(size): return "{}sp".format(avg(Window.size) / 32 * 1.5 ** size) I really suggest against wrapping lines like: number_input = textinput.TextInput( text=str(random.randint(5, 10 ** 10)), font_size=font_size(3)) This seems immensely more readable: number_input = textinput.TextInput( text=str(random.randint(5, 10 ** 10)), font_size=font_size(3) ) or any other semantic equivalent (take your pick). It's potentially better to just reorganize so you don't have to: default_input = str(random.randint(5, 10 ** 10)) number_input = textinput.TextInput(text=default_input, font_size=font_size(3)) Please don't go naming things parent or btn; give them semantically useful, even if more verbose names. parent → root btn → is_prime_btn callback → check_primality
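To make the suggested helper self-contained: `avg` isn't a builtin, so it has to be defined somewhere (this is my assumed reading of that sketch, with the window size passed in explicitly so it can be tried outside Kivy):

```python
def avg(pair):
    """Mean of a (width, height) pair."""
    return sum(pair) / len(pair)

def font_size(window_size, size):
    # Grows geometrically with `size` instead of blowing up near size 10.
    return "{}sp".format(avg(window_size) / 32 * 1.5 ** size)

print(font_size((800, 600), 3))  # 73.828125sp
```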
{ "domain": "codereview.stackexchange", "id": 11194, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, android, primes, kivy", "url": null }
Suppose $z(t)=x(t)+iy(t)$, $a\leq t\leq b$, is an arc and $a=t_{0}<t_{1}<\dots<t_{n}=b$ is a partition of $[a,b]$. Suppose the subarc $z(t)$, $t\in[t_{j-1},t_{j}]$ is contained in a domain $D_{j}$, $j=1,\dots,n$. The function $f_{1}(z)$ on $D_{1}$ is said to be analytically continued along the path $z(t)$, $a\leq t\leq b$, if there is a chain $(f_{1},D_{1})$, $(f_{2},D_{2}),\dots,(f_{n},D_{n})$. Analytic continuation is a powerful aid in establishing transformations or functional equations for complex variables, because it enables the problem to be reduced to: (a) deriving the transformation (or functional equation) with real variables; followed by (b) finding the domain on which the transformed function is analytic. Schwarz Reflection Principle Let $C$ be a simple closed contour consisting of a segment $\mathit{AB}$ of the real axis and a contour in the upper half-plane joining the ends of $\mathit{AB}$. Also, let $f(z)$ be analytic within $C$, continuous within and on $C$, and real on $\mathit{AB}$. Then $f(z)$ can be continued analytically across $\mathit{AB}$ by reflection, that is, (1.10.5) $f(\overline{z})=\overline{f(z)}.$
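The reflection identity is easy to check numerically for a concrete function that is analytic and real on the real axis (my own illustration, not part of the DLMF text):

```python
import cmath

# f has real Taylor coefficients, so it is real on the real axis and
# the reflection principle predicts f(conj(z)) == conj(f(z)).
def f(z):
    return cmath.exp(z) + z * z

z = complex(0.7, 1.3)
lhs = f(z.conjugate())
rhs = f(z).conjugate()
print(abs(lhs - rhs))  # ~0 up to rounding
```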
{ "domain": "nist.gov", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.990140146442157, "lm_q1q2_score": 0.8024877007865266, "lm_q2_score": 0.8104789040926008, "openwebmath_perplexity": 780.6621733249648, "openwebmath_score": 0.9693211913108826, "tags": null, "url": "https://dlmf.nist.gov/1.10" }
c, snake-game, curses Avoid Implicit Casting The variable relSize is type float, the right hand side of the lines: int boundY = yMax/relSize; int boundX = xMax/relSize; will result in floats and that code is truncating the values of the right hand side. In some instances this might result in the wrong value being assigned to boundY and boundX. The standard C include file <math.h> includes the functions round(), ceil() and floor(), which are more appropriate than the simple truncation here. Using type double rather than type float will be more accurate. In any case it is much better to avoid implicit casts and make them explicit: int boundY = (int) (yMax/relSize); int boundX = (int) (xMax/relSize); Use exit() Very Carefully The use of the exit() function may bypass cleanup functions. In C++ you can avoid the exit() function by throwing and catching exceptions; in C there is an error handling capability using setjmp() and longjmp(). Generally setjmp() would be used in main() and longjmp() would be used where error conditions happen. This stackoverflow.com question discusses setjmp() and longjmp(). The use of exit() should be avoided in programs such as operating systems that are never supposed to end. The printf() preceding the call to exit() would be better as an fprintf(stderr, ERROR_MESSAGE);. The printf() function prints to stdout and may not appear in all cases; stderr should appear in all cases.
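The setjmp()/longjmp() error-handling pattern described above looks roughly like this (a sketch with made-up function names, not the game's actual code):

```c
#include <setjmp.h>
#include <stdio.h>

static jmp_buf error_env;  /* filled in by setjmp() before work starts */

static void do_work(int fail)
{
    if (fail) {
        /* report on stderr, as recommended above */
        fprintf(stderr, "fatal error, unwinding\n");
        longjmp(error_env, 1);  /* jumps back to the setjmp() call site */
    }
}

int run(int fail)
{
    if (setjmp(error_env) != 0) {
        /* longjmp() landed here: do cleanup, then fail gracefully */
        return -1;
    }
    do_work(fail);
    return 0;
}
```

run(0) returns 0; run(1) unwinds out of do_work() and returns -1, giving one place to release curses state before exiting.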
{ "domain": "codereview.stackexchange", "id": 25042, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c, snake-game, curses", "url": null }
algorithms, strings, search-algorithms, natural-language-processing It is a fairly simple matter to generate all anagrammings and weigh them, but there may be many, so I would prefer a method that finds light anagrammings directly.
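For the brute-force baseline, the usual trick is to index the word list by sorted-letter signature, so all single-word anagrams of a string come back in one lookup instead of a permutation search (a sketch; the word list here is made up):

```python
from collections import defaultdict

def signature(word):
    # Two strings are anagrams iff their sorted letters are equal.
    return "".join(sorted(word))

def build_index(words):
    index = defaultdict(list)
    for w in words:
        index[signature(w)].append(w)
    return index

index = build_index(["listen", "silent", "enlist", "google", "banana"])
print(index[signature("tinsel")])  # ['listen', 'silent', 'enlist']
```

For multi-word anagrams of a phrase one would recurse over sub-multisets of the letters, scoring candidates by the weight the question mentions.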
{ "domain": "cs.stackexchange", "id": 8396, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, strings, search-algorithms, natural-language-processing", "url": null }
c++, performance, c++17, queue Declaring all your variables on one line like this is terrible practice in general. You should never do it. And in this case, it's actually self-defeating. If you gave myUnion a default constructor that activated forConstexprCtor, and defined your member variables like this: myUnion theArray[N] = {}; Idxtype head = {}; Idxtype tail = {}; Idxtype theSize = {}; then your default constructor could be defaulted: constexpr circularQueue() noexcept = default; Next up is the copy constructor, and this (along with the move constructor, which you don't have but should) is where the rubber really hits the road. When you are copying a circularQueue, none, some, or all of the elements in other will be present. You need to correctly handle all cases. You need to do this->theArray[i].value = other.theArray[i].value; for all elements that are present, and this->theArray[i].forConstexprCtor = {}; for all elements that are not. Figuring out how to do that correctly is the real trick of writing this type. As an aside… why is your copy constructor explicit? What do you think that is accomplishing? And I am completely baffled as to why you have a constructor that copies from a non-const circularQueue. Is this because the following template constructor swallowed the copy/move ops? If so, there is an easier fix. template<typename... Args> explicit constexpr circularQueue(Args&&... theList) : mS{(theList)...}, head{0}, tail{sizeof...(theList)}, theSize{sizeof...(theList)}{} I'm guessing the intention here is to be able to write code like: auto c = circularQueue<int, 4>{1, 2, 3, 4}; // c is a queue with 1,2,3,4 in it.
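The copy pattern being described can be illustrated on a simplified stand-in (the type and member names here are hypothetical, not the poster's exact ones): copy the union's value member only for the occupied slots, and leave the dummy member active everywhere else.

```cpp
#include <cstddef>

// Simplified analogue of the post's circularQueue<T, N>.
template <typename T, std::size_t N>
struct MiniQueue {
    union Slot {
        char dummy;  // plays the role of forConstexprCtor
        T value;
    };
    Slot slots[N] = {};  // dummy member active in every slot
    std::size_t head = 0, tail = 0, count = 0;

    MiniQueue() = default;

    // Copy only the slots that actually hold elements; the rest keep
    // the dummy member active via the default member initializer.
    MiniQueue(const MiniQueue& other)
        : head{other.head}, tail{other.tail}, count{other.count} {
        for (std::size_t k = 0; k < count; ++k) {
            const std::size_t i = (head + k) % N;
            slots[i].value = other.slots[i].value;
        }
    }

    void push(const T& v) {
        slots[tail].value = v;
        tail = (tail + 1) % N;
        ++count;
    }
    const T& front() const { return slots[head].value; }
};

// Small smoke test: copy a queue holding two elements.
int demo() {
    MiniQueue<int, 4> q;
    q.push(1);
    q.push(2);
    MiniQueue<int, 4> copy{q};
    return static_cast<int>(copy.count) * 10 + copy.front();  // 21
}
```

In the poster's constexpr setting the loop body would need C++20 (changing the active union member isn't allowed in C++17 constant expressions), which is exactly why this copy constructor is the tricky part of the type.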
{ "domain": "codereview.stackexchange", "id": 38590, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, performance, c++17, queue", "url": null }
what is the standard error of the sample proportion If samples of size $n$ are repeatedly randomly drawn from a population and the proportion of successes in each sample is recorded as $\widehat{p}$, the distribution of the sample proportions (i.e., the sampling distribution of $\widehat{p}$) can be approximated by a normal distribution, given that both $n\times p\geq 10$ and $n\times(1-p)\geq 10$. This is known as the Rule of Sample Proportions. Note that some textbooks use a minimum of 5 instead of 10. The mean of the sampling distribution of $\widehat{p}$ is $p$, and its standard error is $\sqrt{p(1-p)/n}$.
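As a concrete check, the standard error of a sample proportion, $\sqrt{p(1-p)/n}$, can be computed for a made-up sample (my illustration, not from the original page):

```python
import math

def se_proportion(p, n):
    """Standard error of the sample proportion: sqrt(p(1-p)/n)."""
    return math.sqrt(p * (1 - p) / n)

n, p = 400, 0.3
# Normal approximation applies: n*p = 120 >= 10 and n*(1-p) = 280 >= 10.
print(se_proportion(p, n))  # about 0.0229
```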
{ "domain": "winaudit.org", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9848109534209825, "lm_q1q2_score": 0.8466062813019293, "lm_q2_score": 0.8596637541053281, "openwebmath_perplexity": 1203.1106506971157, "openwebmath_score": 0.9443531036376953, "tags": null, "url": "http://winaudit.org/guides/sample-proportion/standard-error-of-sample-proportion.html" }
astrophysics, astronomy Title: (Astrophysics) How to calculate photons detected by a radiometer over a period of 10 seconds? I know the Flux (calculated from flux density), and frequency but I don't think I have area. This is the question, it is part d and I've done all other parts: "Consider three widely separated frequencies: i) 5.5×10^14 Hz ii) 4.8×10^17 Hz iii) 5.0×10^7 Hz 1a. Convert frequency to wavelength for each of these frequencies, and state which part of the electromagnetic spectrum the frequencies are situated in. [1 mark] 1b. Calculate the energy of a single photon at each of these frequencies. Express your answer in electron-volts (eV) [1 mark]. 1c. Consider an astronomical source with a flux density on Earth of fν = 250 milliJansky (mJy), which is constant across all frequencies. We observe it with a radiometer (a light-measuring instrument) that has a uniform response across all frequencies. It operates by integrating the flux density over a range of frequencies that is 4 Gigahertz (GHz) wide, centered on a given frequency of interest. What is the flux picked up by the radiometer (in units of W m^-2) within a band centered on each of the three frequencies? [2 marks] 1d. At each frequency, how many photons would be picked up by the radiometer, per unit area, in an exposure time of 10 seconds? What can you conclude about these frequency regimes in which the quantum nature of radiation could be important when doing astronomical observations? [2 marks]" I have seen somewhere the equation Number of photons = (F * A * t) / E Where, F is the flux picked up by the detector, A is the effective area of the detector (in square meters), t is the exposure time (in seconds), and E is the energy of a single photon (in joules). Is this equation correct? Can I use it even though I don't have A? Is there a different equation I should be using?
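Since part (d) asks for photons per unit area, the area A cancels out of that equation: N/A = F·t/E with E = hν. A rough numeric sketch using F = fν·Δν from part (c) (my own check of the orders of magnitude, not an official answer):

```python
h = 6.62607015e-34      # Planck constant, J s
jansky = 1e-26          # 1 Jy = 1e-26 W m^-2 Hz^-1

f_nu = 250e-3 * jansky  # 250 mJy flux density
bandwidth = 4e9         # 4 GHz band
t = 10.0                # exposure time, s
flux = f_nu * bandwidth # band-integrated flux, W m^-2 (part c)

for nu in (5.5e14, 4.8e17, 5.0e7):
    photon_energy = h * nu                 # E = h*nu, J per photon
    n_per_area = flux * t / photon_energy  # photons per m^2 in 10 s
    print(f"{nu:.1e} Hz: {n_per_area:.3g} photons per m^2")
```

The conclusion falls out of the numbers: in the X-ray band less than one photon per square metre arrives in 10 s, so the quantum nature of light matters there, while the radio band delivers billions of photons.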
{ "domain": "physics.stackexchange", "id": 99755, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "astrophysics, astronomy", "url": null }
gazebo, mavros [ INFO] [1498053724.611294567]: udp1: Bind address: 127.0.0.1:14550 [ INFO] [1498053724.611363011]: udp1: Remote address: 127.0.0.1:14555 [ INFO] [1498053724.645294362]: Plugin 3dr_radio loaded [ INFO] [1498053724.647885644]: Plugin 3dr_radio initialized [ INFO] [1498053724.648076406]: Plugin actuator_control loaded [ INFO] [1498053724.650636182]: Plugin actuator_control initialized [ INFO] [1498053724.678331142]: Plugin adsb loaded [ INFO] [1498053724.681741307]: Plugin adsb initialized [ INFO] [1498053724.681937153]: Plugin altitude loaded [ INFO] [1498053724.682873856]: Plugin altitude initialized [ INFO] [1498053724.682966734]: Plugin cam_imu_sync loaded [ INFO] [1498053724.683511360]: Plugin cam_imu_sync initialized [ INFO] [1498053724.683650245]: Plugin command loaded [ INFO] [1498053724.687871461]: Plugin command initialized [ INFO] [1498053724.687904301]: Plugin distance_sensor blacklisted spawn_model script started [ INFO] [1498053724.688058060]: Plugin ftp loaded GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.04) 7.11.1 Copyright (C) 2016 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it.
{ "domain": "robotics.stackexchange", "id": 28170, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "gazebo, mavros", "url": null }
visible-light, sun, weather Title: Why are clouds lighter than the sky during the day but darker at night This is probably a very basic question but I couldn't find a good answer to it, most search results are about rain clouds or clouds appearing red at night (something I've never seen except for during sunset but apparently it's common in bigger cities). Basically what I'm wondering is why clouds during the day appear lighter than the sky (white vs light blue) while clouds at night and during the evening appear darker than the sky (see image). Image quality is low because I took it with my phone through my window. I guess the clouds could be blocking the light and therefore appear darker but in that case, shouldn't the same thing be happening during the day? There could be quite a few things going on. Off the bat there's no incoming light for them to scatter: during the day, clouds are white because the water droplets are big enough for all visible light to cause Mie scattering, but if you don't have much light falling on them, you can't observe the scattering and you can't observe light passing through either. Then you could consider the fact that in some places, it rains more in the evening/night than during the day (if you have hotter surface temperatures during the afternoon, you see cloud formation and precipitation during the late evening, and with the lower temperatures in the night, the air is more likely to become saturated, see Dew Point), and clouds which precede rain are thicker and denser. They don't allow much light to pass through. And lastly, there's less ambient light which they can reflect back towards you.
{ "domain": "physics.stackexchange", "id": 52405, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "visible-light, sun, weather", "url": null }
electromagnetism, electrostatics Title: Why is the tangential component of an electric field continuous across the boundary between two media? I see the mathematical derivation for the fact that the tangential component of an electric field across two media is continuous, but I don't intuitively understand how this is the case. The electric field should either be impeded or not depending on the material, and this "impedance" for the electric field should affect tangential components as well. Perhaps, I am misunderstanding something, but I can't intuitively see how this happens. When you have two dielectric materials with homogeneous $\epsilon_1 , \epsilon_2$ each, applying an external electric field will produce a bound charge distribution only on the boundary - the interface between the media. So locally, you have a plane with charge distribution - thus creating an electric field that is perpendicular to the interface, meaning that the perpendicular electric field changes and has a discontinuity, but the parallel component has no reason to change, meaning it is still continuous.
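For intuition to go with the derivation: run a thin rectangular loop of length $L$ along the interface with its short sides (height $h$) crossing it. Faraday's law around that loop gives, as $h\to 0$ (a sketch of the standard argument):

```latex
\oint \mathbf{E}\cdot d\boldsymbol{\ell}
  = E^{\parallel}_{1} L - E^{\parallel}_{2} L
  = -\frac{d\Phi_B}{dt} \;\xrightarrow[h\to 0]{}\; 0
\qquad\Longrightarrow\qquad
E^{\parallel}_{1} = E^{\parallel}_{2}.
```

The bound surface charge produces a field perpendicular to the interface, so it never enters this line integral — which is the geometric reason only the normal component can jump.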
{ "domain": "physics.stackexchange", "id": 38434, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electromagnetism, electrostatics", "url": null }
ogre PKG_CONFIG_LIBDIR: /usr/lib/pkgconfig:/usr/local/Library/ENV/pkgconfig/10.8:/opt/X11/lib/pkgconfig:/opt/X11/share/pkgconfig ACLOCAL_PATH: /usr/local/share/aclocal:/opt/X11/share/aclocal OBJC: cc
{ "domain": "robotics.stackexchange", "id": 14302, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ogre", "url": null }
water, physical-chemistry, air, chemical-compounds Think: NASA did/does very different things depending on the situation! On a space station with ample electricity you can do electrolysis; for Apollo they made electricity from oxygen/hydrogen in fuel cells! And the cost of upmass includes the cost of energy generators or storage batteries. But the topic here is a (realistic) method for barsmonster to survive the Moscow summer. This is best done by cooling the air intake, dehumidifying it (which would hopefully remove some of the organic "smog" particles), and one could try some PP microfibre filters. The exhaust air would be used to cool down the air taken in. Any separation of carbon dioxide by ad(ab)sorbents would be much more expensive than this air exchange.
{ "domain": "physics.stackexchange", "id": 1099, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "water, physical-chemistry, air, chemical-compounds", "url": null }
inorganic-chemistry, acid-base Title: Do polyfluorosulfuric acids like H2SO2F4 exist? I was thinking about a compound resembling $\ce{H2SO4}$ but more acidic. I thought to replace the 2 oxygens with 4 fluorine atoms getting $\ce{H2SO2F4}$, which might be more acidic. Does such a compound even exist? I also tried searching another compound i.e. $\ce{HOSF5}$ thinking of it to be more acidic than the previous but no results. As the comments imply, no dice. You can, of course, make a six-coordinate sulfur compound with the formula $\ce{SF6}$. But trying it with a mixture of oxide, hydroxide and fluoride ligands instead of just fluoride presents the opportunity to evolve $\ce{HF}$ leaving the sulfur with a lower coordination number. There is a five-coordinate compound $\ce{SOF4}$, but otherwise you should expect the sulfur to get down to four-coordination. So the only stable protic acids you can get with one sulfur atom and oxide, hydroxide and fluoride ligands are plain old $\ce{H2SO4}$ and the more strongly acidic $\ce{HSO3F}$.
{ "domain": "chemistry.stackexchange", "id": 10737, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "inorganic-chemistry, acid-base", "url": null }
c#, beginner, tic-tac-toe if (winFlag == false) // No one won --------------------------- { Console.WriteLine("It's a draw!"); Console.WriteLine("Score: {0} - {1} {2} - {3}", player1, score1, player2, score2); Console.WriteLine(""); Console.WriteLine("What would you like to do now?"); Console.WriteLine("1. Play again"); Console.WriteLine("2. Leave"); Console.WriteLine(""); while (correctInput == false) { Console.WriteLine("Enter your option: "); choice = int.Parse(Console.ReadLine()); if (choice > 0 && choice < 3) { correctInput = true; } } correctInput = false; // Reset ------------- switch (choice) { case 1: break; case 2: Console.Clear(); Console.WriteLine("Thanks for playing!"); Console.ReadLine(); playing = false; break; } } if (winFlag == true) // Someone won ----------------------------- { if(turn == 1) { score1++; Console.WriteLine("{0} wins!" , player1); Console.WriteLine("What would you like to do now?"); Console.WriteLine("1. Play again"); Console.WriteLine("2. Leave"); while (correctInput == false) { Console.WriteLine("Enter your option: "); choice = int.Parse(Console.ReadLine()); if (choice > 0 && choice < 3) { correctInput = true; } } correctInput = false; // Reset --------------
{ "domain": "codereview.stackexchange", "id": 26906, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, beginner, tic-tac-toe", "url": null }
electrostatics, energy, charge, conventions, si-units Title: Definition of an electron volt One electron volt is defined as the kinetic energy of an electron in a potential of $1$ volt. Hence, by conservation of energy, electric potential energy = kinetic energy: $$ q V = \mathrm{K.E.} $$ As the charge of an electron is $-e$ and the voltage is $1\mathrm{V}$, the kinetic energy would be $-e$ joules, which will be $-1.6 \times 10^{-19}$ joules. But Google says $1\mathrm{eV} = +1.6 \times 10^{-19}$ joules How is that possible? This is the definition of $1$ electron volt which I know. Am I right? Hence, by conservation of energy, electric potential energy = kinetic energy q x V = K.E This is not a correct expression for the conservation of energy. The conservation of energy would be $$E_{total}=E_{potential}+E_{kinetic}=const.$$ So, if we start with the electron at rest at 0 V then we get the total energy is 0 J and therefore $$E_{kinetic}=-E_{potential}$$ which resolves your concern
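Numerically, the sign bookkeeping in that resolution works out like this (a trivial check with the exact SI value of the elementary charge):

```python
e = 1.602176634e-19  # elementary charge in C (exact in the 2019 SI)

q = -e               # electron charge
delta_V = 1.0        # potential difference, V

# Energy conservation: KE gained = -E_potential = -q * delta_V,
# which is positive for the electron.
ke = -q * delta_V
print(ke)  # 1.602176634e-19 J, i.e. +1 eV
```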
{ "domain": "physics.stackexchange", "id": 94354, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "electrostatics, energy, charge, conventions, si-units", "url": null }
### Liquidxlax because the direction of the gravity and the acceleration are not in the same direction. 3. Dec 6, 2011 ### Delphi51 It is easy to see the book answer. The seat has to push up with force mg to cancel weight. And it also has to push in with ma to provide the centripetal force. Combine those with the Pythagorean theorem to get its answer. 4. Dec 6, 2011 it is obviously true that the gravity and the acceleration are not acting in the same direction. However, my solution does not imply that they are acting in the same direction. I have put a figure for clarification. The forces acting in the radial direction are mgsinθ and n which both point into the center 5. Dec 6, 2011 ### Delphi51 For part (d), isn't θ = 0?
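Spelling out Delphi51's combination for the position where weight is vertical and the centripetal acceleration horizontal (a sketch of the same argument, not a quote from the book):

```latex
n_{\text{vertical}} = mg,
\qquad
n_{\text{radial}} = ma = \frac{mv^{2}}{r},
\qquad\Longrightarrow\qquad
n = \sqrt{(mg)^{2} + (ma)^{2}} = m\sqrt{g^{2} + a^{2}}.
```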
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.953966101527047, "lm_q1q2_score": 0.8127340967646585, "lm_q2_score": 0.8519528057272544, "openwebmath_perplexity": 823.2362684764907, "openwebmath_score": 0.5098291039466858, "tags": null, "url": "https://www.physicsforums.com/threads/normal-force-in-ferris-wheel.557625/" }
beginner, rust if first_condition { let ramanujan_number = a.pow(3) + b.pow(3); if ramanujan_number < arguments.n { ramanujan_numbers.push(ramanujan_number); } } else if second_condition { let ramanujan_number = a.pow(3) + c.pow(3); if ramanujan_number < arguments.n { ramanujan_numbers.push(ramanujan_number); } } else if third_condition { let ramanujan_number = a.pow(3) + d.pow(3); if ramanujan_number < arguments.n { ramanujan_numbers.push(ramanujan_number); } } } } } } ramanujan_numbers.sort(); match ramanujan_numbers.len() { 0 => println!( "No Ramanujan number smaller than {} was found.", arguments.n ), 1 => println!( "The Ramanujan number smaller than {} is {:?}.", arguments.n, ramanujan_numbers ), _ => println!( "The Ramanujan numbers smaller than {} are {:?}.", arguments.n, ramanujan_numbers ), } } Is there any way that I can improve my code? Coding Style Factor the Algorithm into Helper Functions Right now, you’ve got two levels of nested if, several with else blocks, nested inside five levels of loops. This could really benefit from some helper functions. Let’s say I’m giving you the code review and I say, “Oh, we have a function like this already. Maybe you could use that in your algorithm.” /* Finds all pairs (a,b) such that a**3 + b**3 = sum. */ fn taxicab_pairs(sum: usize) -> Vec<(usize, usize)>
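A minimal sketch of that helper (my own take on the suggested signature, using u64 and a float cube root, which is exact at these magnitudes):

```rust
/// Finds all pairs (a, b) with a <= b such that a^3 + b^3 == sum.
fn taxicab_pairs(sum: u64) -> Vec<(u64, u64)> {
    let mut pairs = Vec::new();
    let mut a: u64 = 1;
    while 2 * a * a * a <= sum {
        let rest = sum - a * a * a;
        // candidate b from a rounded floating-point cube root
        let b = (rest as f64).cbrt().round() as u64;
        if b * b * b == rest {
            pairs.push((a, b));
        }
        a += 1;
    }
    pairs
}

fn main() {
    // 1729 is the classic taxicab number: 1^3 + 12^3 = 9^3 + 10^3
    println!("{:?}", taxicab_pairs(1729)); // [(1, 12), (9, 10)]
}
```

With this helper, the main loop reduces to collecting every sum below n for which taxicab_pairs returns at least two pairs.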
{ "domain": "codereview.stackexchange", "id": 44840, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "beginner, rust", "url": null }
algorithms, randomized-algorithms, randomness Title: Can someone explain LazySelect? The LazySelect algorithm is given in these slides as follows. We have a set $S$ of $n = 2k$ distinct numbers and want to find the $k$th smallest element. Let $R$ be a set of $n^{3/4}$ elements chosen uniformly at random with replacement from $S$. Sort $R$ and find $a$ and $b$ such that $\mathrm{rank}_R(a) = kn^{-1/4} - \sqrt{n}$ and $\mathrm{rank}_R(b) = kn^{-1/4} + \sqrt{n}$, where $\mathrm{rank}_X(x) = t$ if $x$ is the $t$th smallest element in $X$. Compute $\mathrm{rank}_S(a)$ and $\mathrm{rank}_S(b)$: Output FAIL if $k < \mathrm{rank}_S(a)$ or $k > \mathrm{rank}_S(b)$. Let $P = \{y \in S\mid a\le y\le b\}$: Output FAIL if $|P| \ge 4n^{3/4}$. Return the $(k - \mathrm{rank}_S(a) + 1)$th smallest element from $P$. Can someone explain the intuition behind the algorithm in a way that is more detailed and easier to understand than the slides above? $\DeclareMathOperator{\rank}{rank}$Given a set $S$ of $n = 2k$ elements, the algorithm is aimed at finding the median of $S$ in linear time, with high probability. The idea is to find two elements $a,b \in S$ with $\rank(a) \leq k \leq \rank(b)$ and $\Delta = \rank(b) - \rank(a)$ as small as possible. Given such elements, we can find the median in time $O(n + \Delta\log\Delta)$ as follows:
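Here is the whole algorithm as runnable Python, wrapped in the retry loop that the FAIL outcomes imply (index arithmetic is clamped so it also works for small n; this is my reading of the slides, not code from them):

```python
import random

def lazy_select_once(S, k):
    """One attempt at the k-th smallest (1-indexed) element of a list S
    of distinct numbers; returns None on FAIL."""
    n = len(S)
    r = max(1, round(n ** 0.75))
    R = sorted(random.choices(S, k=r))     # n^(3/4) samples, with replacement
    center = k * r // n                    # ~ expected rank of answer in R
    sq = round(n ** 0.5)
    a = R[max(0, center - sq)]             # rank_R(a) ~ k n^(-1/4) - sqrt(n)
    b = R[min(r - 1, center + sq)]         # rank_R(b) ~ k n^(-1/4) + sqrt(n)
    rank_a = 1 + sum(1 for x in S if x < a)
    rank_b = 1 + sum(1 for x in S if x < b)
    if k < rank_a or k > rank_b:
        return None                        # [a, b] missed the answer: FAIL
    P = sorted(x for x in S if a <= x <= b)
    if len(P) >= 4 * n ** 0.75:
        return None                        # P too large to sort cheaply: FAIL
    return P[k - rank_a]                   # P[0] = a has rank rank_a

def lazy_select(S, k):
    while True:                            # FAILs are rare, so few retries
        result = lazy_select_once(S, k)
        if result is not None:
            return result
```

Each attempt makes only linear passes over S and sorts sets of size O(n^(3/4)), which is where the linear expected running time comes from.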
{ "domain": "cs.stackexchange", "id": 2991, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "algorithms, randomized-algorithms, randomness", "url": null }
It can help a great deal to go from logic to English (or vice versa) in stages, via "Loglish" -- that unholy mixture of English and symbolism which we cheerfully use in the classroom! So .... There is someone $y$ such that $(\forall xF(x,y))\land(\forall z((\forall wF(w,z))\rightarrow y=z))$ There is someone $y$ such that (everyone $x$ is such that $x$ can fool $y$) and (everyone $z$ is such that $(\forall wF(w,z))\rightarrow y=z$) i.e. There is someone $y$ such that everyone can fool $y$ and everyone $z$ is such that (if everyone $w$ can fool $z$, then $z$ is the same person as $y$). i.e. There is someone $y$ such that everyone can fool $y$ and anyone whom everyone can fool is none other than $y$ again. i.e. There is someone whom everyone can fool, and no one other than he can be fooled by everyone. i.e. There is exactly one person whom everyone can fool. Read this from top to bottom to translate the formal wff into English. And now read the same sequence from bottom to top to translate in the other direction!! Taking things in stages like this helps a great deal when first learning to translate in either direction. There are lots more worked examples of this kind involving nested quantifiers in my Introduction to Formal Logic (Ch. 24), with more exercises and answers online. For practice quickly makes perfect: but it does take a bit of practice to make this all seem as easy as it really is. I recall Paul Teller's A Modern Formal Logic Primer is also quite good on translation (his book, now out of print, is freely available from his website).
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9653811601648193, "lm_q1q2_score": 0.8335019907196668, "lm_q2_score": 0.8633916064586998, "openwebmath_perplexity": 237.75494046098788, "openwebmath_score": 0.5224370360374451, "tags": null, "url": "http://math.stackexchange.com/questions/499021/nested-quantifiers-in-english-translation/499097" }
simulations, quantum-computer, quantum-error-correction Title: What are the logical 0 and 1 states in the 9 qubit 'surface 17' code? I am trying to implement the 9 qubit 'surface 17' code; however, I couldn't find in the literature what the encoding states for such a physical system are. I have found in the paper Low-distance Surface Codes under Realistic Quantum Noise that one may use $\bar{X}=X_2X_4X_6$ and $\bar{Z}=Z_0Z_4Z_8$ as logical operators (qubits are numbered 0 to 8), however I don't quite understand why these should be chosen and how they are recovered. Isn't checking only 3/9 qubits more prone to errors? To find the logical operators starting from a stabilizer set, you need the former to be stabilized by the latter. This happens iff the operators commute with all the stabilizers; such operators can then be considered logical, since applying them maps the code space to itself. You can verify yourself on Table II of the cited paper that the operators you mention commute with all the Z and X stabilizers: the X operator has one qubit in common with each X stabilizer, and two (or zero) in common with each Z stabilizer, so every commutator vanishes. The Z operator's case is symmetrical.
{ "domain": "physics.stackexchange", "id": 96066, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "simulations, quantum-computer, quantum-error-correction", "url": null }
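The commutation check the answer describes can be automated: two Pauli strings commute iff they carry distinct single-qubit Paulis on an even number of shared qubits. A minimal sketch (the full surface-17 stabilizer list is in Table II of the cited paper and is not reproduced here; only the two logical operators from the question are used):

```python
def paulis_commute(p, q):
    """p, q map qubit index -> 'X', 'Y' or 'Z'. Distinct single-qubit
    Paulis anticommute, so two strings commute iff the number of
    overlapping qubits carrying distinct Paulis is even."""
    anticommuting = sum(1 for i in set(p) & set(q) if p[i] != q[i])
    return anticommuting % 2 == 0

# The logical operators from the question (qubits numbered 0 to 8):
X_L = {2: 'X', 4: 'X', 6: 'X'}
Z_L = {0: 'Z', 4: 'Z', 8: 'Z'}
```

As required for a logical pair, $\bar{X}$ and $\bar{Z}$ overlap on the single qubit 4 and therefore anticommute, while each commutes with any stabilizer it overlaps on an even number of anticommuting qubits.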
ros2, clion, ros-crystal Second, he has me create a top-level CMakeLists.txt file: add_subdirectory(src/ament/googletest/googlemock) add_subdirectory(src/ament/googletest/googletest) add_subdirectory(src/ament/uncrustify_vendor) add_subdirectory(src/eProsima/Fast-CDR) add_subdirectory(src/eProsima/Fast-CDR/src/cpp) add_subdirectory(src/eProsima/Fast-CDR/test) add_subdirectory(src/osrf/osrf_testing_tools_cpp/osrf_testing_tools_cpp) add_subdirectory(src/osrf/osrf_testing_tools_cpp/osrf_testing_tools_cpp/src) add_subdirectory(src/osrf/osrf_testing_tools_cpp/osrf_testing_tools_cpp/src/memory_tools) add_subdirectory(src/osrf/osrf_testing_tools_cpp/osrf_testing_tools_cpp/src/test_runner) add_subdirectory(src/osrf/osrf_testing_tools_cpp/osrf_testing_tools_cpp/test) add_subdirectory(src/osrf/osrf_testing_tools_cpp/osrf_testing_tools_cpp/test/cmake) add_subdirectory(src/osrf/osrf_testing_tools_cpp/osrf_testing_tools_cpp/test/memory_tools) add_subdirectory(src/osrf/osrf_testing_tools_cpp/osrf_testing_tools_cpp/test/test_runner) add_subdirectory(src/osrf/osrf_testing_tools_cpp/test_osrf_testing_tools_cpp)
{ "domain": "robotics.stackexchange", "id": 32769, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ros2, clion, ros-crystal", "url": null }
the-sun, the-moon, orbital-mechanics, time Title: How can I plot a rotary cam for the equation of time? I'd like to use Solidworks to create a cam (peanut looking) for computing the equation of time in a pocket watch. Would appreciate any pointer in the right direction or advice. Audemars Piguet EoT Following the video, the shape of the cam is a polar plot of the equation of time, with an offset to keep the values positive. You can use any of the trigonometric approximations to the equation of time to plot this, with your choice of offset (presumably a smaller offset makes a more deeply indented cam that is hard to follow, while a larger offset makes a cam that is easy to follow but heavier and takes up more space). So the exact shape is a practical and engineering decision. You can plot the polar curve with your favourite graph plotter. Here is a quick mockup with GeoGebra and a simple formula adapted from Wikipedia, $$r = c -0.7659\sin(6.24+0.0172 d)+0.9863\sin(2( 6.24+0.0172 d)+3.5932)$$ in which $c$ is the offset to keep the radius positive and $d$ is the day of the (Julian) year, $0\le d<365.25$. How that shape is then used to adjust a clockwork mechanism is hinted at by the video, but well beyond my knowledge of clocks (and isn't an astronomical question)
{ "domain": "astronomy.stackexchange", "id": 7293, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "the-sun, the-moon, orbital-mechanics, time", "url": null }
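As a sketch of the plotting step, the answer's formula can be evaluated in Python instead of GeoGebra. The offset $c = 2$ and the export to Cartesian points are my own illustrative choices (the amplitude of the two sine terms is at most $0.7659 + 0.9863 \approx 1.75$, so any $c$ above that keeps the radius positive):

```python
import math

def eot(d):
    """Equation-of-time term from the answer's formula, d = day of the year."""
    D = 6.24 + 0.0172 * d
    return -0.7659 * math.sin(D) + 0.9863 * math.sin(2 * D + 3.5932)

def cam_radius(d, c=2.0):
    """Polar radius r = c + eot(d); c offsets the curve to keep r > 0."""
    return c + eot(d)

# One (x, y) profile point per day, ready to import into a CAD sketch.
profile = [(cam_radius(d) * math.cos(2 * math.pi * d / 365.25),
            cam_radius(d) * math.sin(2 * math.pi * d / 365.25))
           for d in range(366)]
```

The resulting closed curve is the "peanut" shape; a larger `c` smooths it out at the cost of a bigger cam, matching the trade-off described above.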
quantum-mechanics, angular-momentum, quantum-spin I probably would understand what is meant by up/down spin if I knew how the spin of a particle is actually measured (I think I could handle a detailed and precise explanation, but a crude explanation will suffice if it gives insight as to 1. why spin is up or down and 2. why it can come off as left and right when measured). Is up/down just a name given by physicists to two different types of spins? Or does it have something to do with the actual directions? Related What is spin as related to particles Your confusion probably arises not from the technical details of spin measurement, but from the peculiar nature of quantum mechanics. The spin state of an electron can be arbitrarily aligned, so there are infinitely many possible spin states, not just up and down. But all these states live in a 2-dimensional vector space, and the up and down states are one set of basis vectors of this space. In other words, any spin state may be written as a linear combination of up and down states (or left and right states). Designating up and down states as the basis is analogous to choosing a coordinate system; the choice is arbitrary and does not establish a preferential orientation in space. Another peculiar thing about quantum physics is the measurement-induced "collapse" of the quantum state. Whatever the initial orientation, if you measure spin along the z-axis, the outcome can only be up or down, each with a certain probability. Now since a left state tilts neither upward nor downward, it is natural that each outcome occurs with probability 50%.
{ "domain": "physics.stackexchange", "id": 11238, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, angular-momentum, quantum-spin", "url": null }
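The 50% figure in the last paragraph is just the Born rule applied to a "left" state expanded in the up/down basis. A minimal numerical illustration (the state conventions below are a standard choice, not taken from the answer):

```python
import math

# z-basis: up = (1, 0), down = (0, 1); a "left" (-x) spin state is an
# equal-weight superposition with a relative minus sign.
up = (1.0, 0.0)
down = (0.0, 1.0)
left = (1 / math.sqrt(2), -1 / math.sqrt(2))

def prob(outcome, state):
    """Born rule |<outcome|state>|^2, for real-amplitude 2-vectors."""
    amplitude = outcome[0] * state[0] + outcome[1] * state[1]
    return amplitude ** 2
```

Measuring spin along z on the left state gives up with probability 0.5 and down with probability 0.5, even though the state itself points in a perfectly definite direction along $-x$.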
thermodynamics, general-relativity, quantum-field-theory, cosmology, entropy Note that this statement is within the mathematical model of classical general relativity and mechanics, which can explain how in a cyclic universe the entropy gets reset. He notes that the second law is not violated but transcended. I think the expression "transcended" is used to cover the fact that gravity is not yet definitively quantized, and the model of an evaporating black hole needs quantization of gravity, since the way the black hole evaporates uses a phenomenology of an effective quantized gravity. IMO it is a hand-waving term. This is under research; see for example here.
{ "domain": "physics.stackexchange", "id": 90034, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "thermodynamics, general-relativity, quantum-field-theory, cosmology, entropy", "url": null }
The order of a group $G$ is the number of its elements. An element $a$ has a right inverse if for some $r$ the equality $ar = 1$ holds, and a left inverse if $la = 1$ for some $l$. For matrices: if $MA = I_n$ then $M$ is called a left inverse of $A$, and if $AN = I_n$ then $N$ is called a right inverse of $A$; when $A$ has linearly independent columns, $A^TA$ is an invertible symmetric matrix and $(A^TA)^{-1}A^T$ is a left inverse of $A$. For functions, a map $f \colon X \to Y$ has a left inverse iff it is injective and a right inverse iff it is surjective: given a surjection, by the Axiom of Choice there exists a choice function $h$ assigning to each element of $Y$ one of its preimages, and $h$ is a right inverse (so $f$ may be surjective but not injective, while $h$ is injective but not surjective). In a monoid in which every element has a left inverse with respect to the identity $e$, every left inverse is also a right inverse, and the monoid is in fact a group; this holds even if the group is nonabelian.
{ "domain": "hourofscampering.com", "id": null, "lm_label": "1. YES\n2. YES\n\n", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9449947086083138, "lm_q1q2_score": 0.8013250411067637, "lm_q2_score": 0.8479677545357569, "openwebmath_perplexity": 750.6360408365724, "openwebmath_score": 0.7259578704833984, "tags": null, "url": "http://hourofscampering.com/lok-tbsbbz/da4113-left-inverse-in-a-group" }
Substitute: $4\int u\,du$ . . . etc. $\int \frac{e^{t}\cos(e^t)\,dt}{3+5\sin(e^t)}$ Let $u \:=\:3+5\sin(e^t)\quad\Rightarrow\quad du \:=\:5e^t\cos(e^t)\,dt \quad\Rightarrow\quad e^t\cos(e^t)\,dt \:=\:\frac{1}{5}\,du$ Substitute: $\int\frac{\frac{1}{5}\,du}{u} \;=\;\frac{1}{5}\int \frac{du}{u}$ . . . etc. $\int^{\frac{4}{5}}_0 \frac{\sin^{-1}\!\left(\frac{5}{4}x\right)}{\sqrt{16-25x^2}}\,dx$ The denominator is: $\sqrt{16\left(1 - \frac{25}{16}x^2\right)} \;=\;4\sqrt{1 - \left(\frac{5}{4}x\right)^2}$
{ "domain": "mathhelpforum.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9848109503004294, "lm_q1q2_score": 0.8608008277530769, "lm_q2_score": 0.874077230244524, "openwebmath_perplexity": 10399.127820408063, "openwebmath_score": 0.9726709127426147, "tags": null, "url": "http://mathhelpforum.com/calculus/41982-find-indicated-integrals.html" }
oxidation-state A good example used to illustrate this is acetone. Determine the OS of all the elements in acetone, $\ce{C3H6O}$. The answer is in the spoiler below (no peeking ಠ_ಠ): $\mathrm{OS}_\ce{O} = -2, \mathrm{OS}_\ce{C} = \frac {-4}3, \mathrm{OS}_\ce{H} = +1$ There's one oxygen in acetone, and it's bonded to the carbon: Applying the rules, we get $\ce{C1}$ has four valence electrons, but there are "seven electrons around it". Both electrons from each $\ce{C-H}$ bond belong to carbon. Thus, $4 - 7 = -3$. $\ce{C2}$ has four valence electrons, but only two electrons are around it: one from each of the two $\ce{C-C}$ bonds. That makes $4 - 2 = +2$. $\ce{C3}$ is just like $\ce{C1}$, $\mathrm{OS} = -3$.
{ "domain": "chemistry.stackexchange", "id": 7614, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "oxidation-state", "url": null }
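A quick arithmetic check that the assignments above are mutually consistent: for a neutral molecule the oxidation states must sum to zero, and the three carbon values must average to $-4/3$.

```python
# Oxidation states assigned above for acetone, C3H6O.
carbons = [-3, +2, -3]       # C1 (methyl), C2 (carbonyl), C3 (methyl)
hydrogens = [+1] * 6
oxygen = -2

total = sum(carbons) + sum(hydrogens) + oxygen  # 0 for a neutral molecule
average_carbon = sum(carbons) / len(carbons)    # -4/3, as stated
```
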
quantum-mechanics, electromagnetic-radiation, wavelength, wave-particle-duality Title: Does Wave-Particle Duality Mean "Particles" are Just Waves With Short Wavelengths? I have the following question about wave-particle duality: Are particles really just waves with short wavelengths? If this is correct, would it then be accurate to say: "everything in the universe is a wave, but when a wavelength is short, it acts like our macroscopic conception of a particle. However, on a quantum level, everything is really just a wave" For years, I have thought about it like I stated above and it makes perfect sense to me. Indeed, the de Broglie relation $$\text{wavelength} = \frac{h}{mv}$$ which shows that all matter exhibits wave-like properties, seems to confirm my understanding that they are "really" just waves with short wavelengths. But I ask the question because I hear quotes like "we don't know if things are particles or waves" and "our brains can't comprehend it", etc. I want to make sure I am not missing something. The following quote also seems to justify the interpretation I have given above: "If the distance between wave peaks is much smaller than the size of an object, the object will block the waves. But if the distance between wave peaks is much larger than the size of an object, the waves will go around the object."
{ "domain": "physics.stackexchange", "id": 82416, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-mechanics, electromagnetic-radiation, wavelength, wave-particle-duality", "url": null }
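Plugging numbers into the de Broglie relation shows why macroscopic objects never display measurable wave behaviour: their wavelengths are absurdly short compared to any obstacle. A rough sketch (the masses and speeds below are illustrative choices, not from the question):

```python
h = 6.62607015e-34  # Planck constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Wavelength = h / (m v), in metres."""
    return h / (mass_kg * speed_m_s)

# An electron at 10^6 m/s versus a thrown baseball (~145 g at 40 m/s).
electron = de_broglie_wavelength(9.109e-31, 1.0e6)  # on the order of an Angstrom
baseball = de_broglie_wavelength(0.145, 40.0)       # many orders of magnitude smaller
```

The electron's wavelength is comparable to atomic spacings (hence electron diffraction), while the baseball's is some 24 orders of magnitude below a proton's radius, so nothing it meets can diffract it.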
homework-and-exercises, photons, pair-production Title: Atom emits a photon - what is the energy of the photon? Let's say I have an atom of known mass that goes from an excited state to the ground state, whose energy is $14.4~keV$ lower. I know that the emitted energy of $14.4~keV$ gets converted into the energy of a photon and the kinetic energy of the atom - this means the photon energy satisfies $E_\gamma < 14.4~keV$. I tried to calculate it like this and got nonsense: \begin{align} E_{1} &= E_{2}\\ \sqrt{ {E_{0~Fe}}^2 + {p_1}^2c^2 } &=\sqrt{ \left(E_{0~Fe} - E_\gamma\right)^2 + {p_2}^2 c^2 } \longleftarrow \substack{\scriptsize \boxed{p_1 = 0}~\boxed{p_2 = E_\gamma /c}}\\ \sqrt{ {E_{0~Fe}}^2 + 0 } &= \sqrt{ \left(E_{0~Fe} - E_\gamma\right)^2 + \frac{{E_\gamma}^2}{c^2} c^2 } \\ E_{0~Fe} &= \sqrt{ \left(E_{0~Fe} - E_\gamma \right)^2 + {E_\gamma}^2 }\\ {E_{0~Fe}}^2 &= {E_{0~Fe}}^2 - 2E_{0~Fe}E_\gamma +{E_\gamma}^2 + {E_\gamma}^2 \\ {E_\gamma}^2 + (-E_{0~Fe})E_\gamma + 0 &= 0\\ &\Downarrow\\ E_\gamma &= 0\\ E_\gamma &= E_{0~Fe} \end{align} Neither solution of the quadratic equation makes sense. Can anyone give me a hint as to where I went wrong?
{ "domain": "physics.stackexchange", "id": 8705, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, photons, pair-production", "url": null }
everyday-chemistry, polymers A more complete description of what makes chewing gum sticky states: Terence Cosgrove, Professor of Chemistry at the University of Bristol and Chief Scientific Officer of Revolymer, explains that the chemical bonds between the molecules in a polymer-based chewing gum make it difficult to remove from a surface. When you pull a piece of used gum off a surface, most of the energy goes into stretching the polymer bonds in the gum, rather than actually breaking the bonds between the surface and the gum. Since polymers are long chains of covalently bonded molecules that give the gum its elasticity and "chewiness," the attractive forces between the atoms are formed by the sharing of electrons [8]. Thus, the bonds between the repeating units tend to remain intact as they lengthen and contract from an applied external force. The temperature at which the polymer base is exposed to the air also affects both the elasticity and adhesiveness of the chewing gum. When a piece of gum is heated by saliva and deformed by the grinding of human teeth, its polymer chains align in the direction of these forces. The degree of alignment is a function of the magnitude of applied stress, which explains why the gum becomes tougher and less elastic the longer or more vigorously you chew [8]. After the gum is removed from your mouth and placed in a cooler environment, the drop in temperature causes the orientation of the polymer chains to freeze, resulting in a hardened piece of used gum. Most commercial polymer gum bases are hydrophobic (water-insoluble), which is the reason they stick easily to oily surfaces and are difficult to remove, even with the help of household cleaning solutions. A hydrophobic substance tends to repel polarized molecules like water and attract non-polar compounds, such as the grease and grime on streets. A hydrophilic substance, on the other hand, behaves in the opposite manner by attracting water and repelling fats and oils [8]. 
A compound that exhibits both hydrophobic and hydrophilic traits is known as "amphiphilic" [9]. These substances, which attract both water and oil to some degree, can be used to create synthetic polymers that could help eliminate the sticky mess of chewing gum pollution.
{ "domain": "chemistry.stackexchange", "id": 3795, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "everyday-chemistry, polymers", "url": null }
c#, asynchronous, wpf, socket, tcp return true; } async Task<string> Receive(int bufferSize) { var buffer = new byte[bufferSize]; // Create a CancellationTokenSource to time out ReceiveAsync using var tokenSource = new CancellationTokenSource(timeout); try { var received = await client.ReceiveAsync(buffer, SocketFlags.None, tokenSource.Token); var response = Encoding.ASCII.GetString(buffer, 0, received); Debug("Response Received: " + response); return response; } catch (TaskCanceledException) { Debug("Receive timeout"); return null; } catch (Exception ex) { Debug("Receive Error: " + ex.Message); return null; } } Now each operation is its own method with its own logging, which makes the main inner loop look similar to while (wifi_state) { string[] frame = cmd_Queue.Take(); // string[] { IP, Command } Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); try { if (await Connect(frame[0], port)) { var data = Encoding.ASCII.GetBytes(frame[1]); if (await Send(data)) { var response = await Receive(client.ReceiveBufferSize); } } } catch (SocketException ex) { Debug("Comms Error: " + ex.Message); } client.Close(); } Warning: I didn't run/test this code as I don't have all your code or a server setup to send and receive messages. This is just used as an example of how using the TAP and restructuring the code into smaller functions/methods will help make it read easier and will be more up-to-date, and I personally feel it will be easier to maintain by anyone coming afterwards.
{ "domain": "codereview.stackexchange", "id": 42655, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c#, asynchronous, wpf, socket, tcp", "url": null }
c++, console, snake-game int main() { SnakeGame snake; snake.run(); } Brute force draw of the screen takes 900 microseconds. Note: that's a grid of 20 * 20 so a small screen. Will look at what an optimized draw looks like this weekend (where I only update the head and tail of the snake every drawFrame()). Missing #includes The code you posted doesn't compile without adding additional #include statements, for example <cmath> for the math functions, <algorithm> for std::find(), and so on. Make Direction an enum class Make it a habit to always use enum class instead of enum, unless you really need the latter. An enum class adds extra type-safety. Naming things What's a SLocation? Secure location? Static location? Snake location? If you avoid abbreviating names, others won't have to guess. Even the latter is not really appropriate, as it's not a single location. Maybe BodyLocations is better? That it's the body of a snake is clear from the context. Avoid specifying time resolution unless really necessary As I also mentioned in the review of part 2 of your game engine, you are dealing with explicit milliseconds way too early. Try to keep durations in unspecified std::chrono::duration variables for as long as possible. Of course, here you need to pass in the initial step time. So do it like this: class SnakeGame: public ThorsAnvil::GameEngine::Game { using Clock = std::chrono::steady_clock; using Duration = Clock::duration; Duration stepTime = std::chrono::milliseconds(500); … virtual Duration gameStepTime() override { return stepTime; } … void handleLogic() override { … if (snake.head() == cherry) { stepTime *= speedIncreaseFactor; … } } … };
{ "domain": "codereview.stackexchange", "id": 45526, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "c++, console, snake-game", "url": null }
newtonian-mechanics If a force is at an oblique angle to the surface, there will be a component that is parallel to the surface and one normal to it, as forces can be resolved into two components. However, if it is a frictionless surface, you say it will not exist. Why not? You said in the case of a frictionless surface, the only reaction is the normal force. In this case, the vector sum can always be resolved into one component, normal to the surface. I cannot gain intuition regarding this. I have added two more diagrams to show an oblique applied force $F$, and labeled all the diagrams Figures 1-4. Figure 3 is for a frictionless surface. It shows an oblique force $F$ applied to the mass $M$. The force $F$ is resolved into its x- and y- force components as shown. The y- component of the force adds to the downward weight of the mass and equals the normal reaction at the surface, for a net force of zero and no vertical acceleration. There is no friction force, so there is no horizontal reaction force at the surface. The reaction force to the x- component of the applied force is directed at the source of the applied force (source not shown). This satisfies Newton's 3rd law. But since there is no friction opposing the x- component of the force applied to the mass $M$, there is a net force in the x- direction to the left, and per Newton's 2nd law the block $M$ accelerates to the left. Figure 4 shows the same situation as Figure 3, but now a static friction force opposes and equals the x- component of the applied force $F$, as long as $F_x$ does not exceed the maximum static friction force of $μ_{s}Mg$. So there is now a net force of zero in the x direction and the block does not accelerate. Note, however, that there is still the action-reaction pair of forces between $M$ and the source of $F$ per Newton's 3rd law. That holds whether there is friction or not. I'm afraid I don't have any more time to put into this, so I hope this finally helps.
{ "domain": "physics.stackexchange", "id": 62114, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "newtonian-mechanics", "url": null }
9. Jan 13, 2011 ### msas Re: After further algebra Absolutely brilliant derivation! I'd really be interested in how you got all this done. A pointer to a book would suffice, no need to type the whole thing. The reason I'm asking is: I'm dealing with the same functions, with the only difference being that $$a,b,A,B$$ are all functions (we could say polynomials) of $x$. I'm more comfortable talking in terms of time, cause I'm into digital signal processing, but just as well. Could I just switch the constants A,B with A(t),B(t)? For the previous eq: $$A\cos(a) + B\cos(b) = (A+B)\cos(x)(1-z\cos(y))$$ it is trivial to see that A,B can be swapped with functions A(t),B(t), so it's looking good so far. However I'm not sure if all the magic you did, that results in that arctan etc, is also "immune" to such a swap. Please let me know if you can. I'm googling like crazy with no luck so far. Thank you in any case! Sash
{ "domain": "physicsforums.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9820137910906878, "lm_q1q2_score": 0.8028694065009396, "lm_q2_score": 0.817574471748733, "openwebmath_perplexity": 641.5491739145629, "openwebmath_score": 0.682317852973938, "tags": null, "url": "https://www.physicsforums.com/threads/beating-with-signals-of-different-amplitude.201577/" }
sql, sql-server, stackexchange You could do this SELECT * FROM Posts WHERE CreationDate < DATEADD( MONTH, -1, @startDate ); This lets the cardinality estimator use the available statistics on the CreationDate column without confusing it with the function. If you're grouping by a value that doesn't actually add a new level of granularity (Users.DisplayName doesn't actually change the grouping) it can be more efficient to use an aggregate there; you get the same result, but a cheaper sort. SELECT MAX( u.DisplayName ) -- I'm cheaper than grouping on me Playing around with the APPLY operator can be fun as well; I've often seen it perform better than a NOT EXISTS, like so (with a GoodAnswers.Id IS NULL in the WHERE clause to get an exclusive outer apply). Note - this one wasn't tested, so YMMV. APPLY can be pretty situational, but I always have fun writing them. OUTER APPLY ( SELECT TOP( 1 ) Id FROM Posts OldAnswers WHERE OldAnswers.ParentId = Questions.Id AND OldAnswers.CreationDate < @startDate AND OldAnswers.Score > 0 ORDER BY ( SELECT NULL ) ) GoodAnswers Overall, I came up with something like this. DECLARE @startDate date; SET @startDate = CONVERT( date, '2019-07-21' );
{ "domain": "codereview.stackexchange", "id": 35674, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "sql, sql-server, stackexchange", "url": null }
\documentclass{article} \usepackage{amsmath,mathtools} \begin{document} For a quadratic with discriminant $\Delta$ where \begin{gather} \Delta = b^2 - 4ac, \\ \shortintertext{we have the general solution} x = \dfrac{-b \pm \sqrt{\Delta}}{2a}. \end{gather} This has the two roots $x'$ and $x''$ given by \begin{align} x' & = \dfrac{-b + \sqrt{\Delta}}{2a} \\ \shortintertext{and} x'' & = \dfrac{-b - \sqrt{\Delta}}{2a}. \end{align} \end{document} As an aside, consider using x^+ and x^- instead of the prime notation, as it makes things clearer; otherwise people may think you're talking about derivatives.
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9381240090865197, "lm_q1q2_score": 0.8064711973488332, "lm_q2_score": 0.8596637433190939, "openwebmath_perplexity": 1664.1501278214446, "openwebmath_score": 0.6853571534156799, "tags": null, "url": "https://tex.stackexchange.com/questions/553334/how-to-separate-quadratic-formula-between-delta-and-x-and-x/553338" }
complexity-theory Title: $PSPACE$ not equal to $DSPACE(2^n)$ It seems pretty obvious that $PSPACE$ is not equal to $DSPACE(2^n)$. Can this be shown using the space hierarchy theorem? Is that the simplest and most straightforward way? It is tempting to use the following argument: For every $k$, the space hierarchy theorem shows that there is some problem in $\mathrm{DSPACE}(2^n)$ which is not in $\mathrm{DSPACE}(n^k)$ (since $n^k = o(2^n)$), hence $\mathrm{PSPACE} \neq \mathrm{DSPACE}(2^n)$. Unfortunately, the same argument shows that $\mathrm{PSPACE} \neq \mathrm{PSPACE}$, since for every $k$ there is a problem in $\mathrm{DSPACE}(n^{k+1}) \subset \mathrm{PSPACE}$ which is not in $\mathrm{DSPACE}(n^k)$ (since $n^k = o(n^{k+1})$). What went wrong? Hopefully this will be clear later on. How do we fix this argument? Take a (space-constructible) function $f(n)$ such that $f(n) = o(2^n)$ and, for every $k$, $n^k = O(f(n))$. For example, we could take $f(n) = 1.5^n$. The space hierarchy theorem gives a language in $\mathrm{DSPACE}(2^n)$ which is not in $\mathrm{DSPACE}(f(n))$. This language is not in $\mathrm{DSPACE}(n^k)$ for any $k$, hence it is not in $\mathrm{PSPACE}$. Notice that in contrast to the preceding argument, here the same language works for every $k$; there we got a different language for each $k$.
{ "domain": "cs.stackexchange", "id": 14063, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "complexity-theory", "url": null }
python, pygame, mathematics class Block(object): def __init__(self,x,y,sprite): self.x=x self.y=y self.sprite=sprite self.rect=self.sprite.get_rect(x=self.x,y=self.y) top=(random.randint(5,8)*32) cen=(top+random.randint(4,6)*32) down=15 across=0 blklvl=0 while across<3200: while down>0: screen.fill((0,0,0)) if blklvl==top: blocksel=grass instancelist.append(Block(across,blklvl,blocksel)) if blklvl>top: if blklvl<cen: blocksel=dirt instancelist.append(Block(across,blklvl,blocksel)) if blklvl>cen-1: blocksel=stone instancelist.append(Block(across,blklvl,blocksel)) down=down-1 blklvl=blklvl+32 if down==0: if across<3200: per=(across/(32/5)) if per>100: per=100 top=(random.randint(5,8)*32) cen=(top+random.randint(4,6)*32) down=15 blklvl=0 across=across+32 down=15 #print 'GENERATION:'+str(per)+'%' pygame.display.flip() players.append(Player(640/2,20)) blocksel=dirt #mainloop while True: #block select key=pygame.key.get_pressed() if key[K_1]: blocksel=grass if key[K_2]: blocksel=dirt if key[K_3]: blocksel=stone
{ "domain": "codereview.stackexchange", "id": 4727, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "python, pygame, mathematics", "url": null }
filters, filter-design, finite-impulse-response, digital-filters How to find the transfer function if it is this kind of form? This kind of representation confuses me a lot. How to determine the direct form and lattice form and their coefficients for such an FIR filter? Q: How to find the transfer function if it is this kind of form? Assumption: the zeros are located at $z_{1,2} = r(\omega_0)e^{\pm j\omega_0}$ where $r(\omega_0) = 1-b+b\cos\omega_0$. We can generically write the z-transform domain representation of a 2nd order FIR filter as: $$h(z) = k(z-z_1)(z-z_2) = k(z^2-(z_2+z_1)z+z_1z_2)$$ where $k$ is an arbitrary scale factor and $z_{1,2}$ are the locations of the zeros. Converting to the frequency domain by letting $z \to e^{j\omega}$: $$H(\omega) = k(e^{j2\omega} - (z_2+z_1)e^{j\omega} +z_1z_2)$$ Now your choice of $z_{1,2}$ can be substituted in. How to determine the direct form and lattice form and their coefficients for such an FIR filter? The direct form is pretty much given by the expression for $h(z)$ above. To convert to a lattice representation, follow the steps below (based on Lattice-Structure for FIR filters). This notation is somewhat different than above, but matches that in the reference more closely. Let $y(n) = x(n) + \alpha_2(1)x(n-1) + \alpha_2(2)x(n-2)$
{ "domain": "dsp.stackexchange", "id": 10886, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "filters, filter-design, finite-impulse-response, digital-filters", "url": null }
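For the second-order case set up above, the step-down recursion that yields the lattice (reflection) coefficients has a simple closed form. A sketch under the convention $A(z) = 1 + \alpha_2(1)z^{-1} + \alpha_2(2)z^{-2}$ (assuming $\alpha_2(2) \neq -1$; the function names are mine):

```python
def fir2_to_lattice(a1, a2):
    """Reflection coefficients (k1, k2) for A(z) = 1 + a1*z^-1 + a2*z^-2.
    One pass of the step-down recursion: k2 = a2, and the order-1
    polynomial's coefficient a1 / (1 + a2) is k1."""
    k2 = a2
    k1 = a1 / (1 + a2)
    return k1, k2

def lattice_to_fir2(k1, k2):
    """Step-up recursion, returning (a1, a2); useful as a round-trip check."""
    return k1 * (1 + k2), k2
```

Round-tripping through both functions recovers the original direct-form coefficients, which is a handy sanity check when extending the recursion to higher orders.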
ds.algorithms

You can post other greedy strategies and prove (or disprove!) them by employing (and maybe combining) classic proof strategies such as mathematical induction, contradiction, and the exchange argument.

Edit: Actually, I don't quite understand why my answer (instead of that of @Marzio De Biasi) was accepted. Maybe the OP only wants to know whether his/her greedy strategy is correct or not. In any case, please refer to the answer of @Marzio De Biasi (and also the other ones) for a complete solution.
{ "domain": "cstheory.stackexchange", "id": 2981, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "ds.algorithms", "url": null }
computability, semi-decidability

Consider the statement: "For every infinite $L \in \mathcal{R}$ there is an infinite $D \in \mathcal{D}$ such that $D \subseteq L$."

You are asking whether we can prove the statement by giving an effective procedure which assigns to every $L$ a corresponding $D$ in such a way that $D$ does not depend on the choice of the code of $L$. (Clearly, we can do this ineffectively by an application of the axiom of choice.) More precisely, the question seems to be whether there is a computable map $f$ such that
{ "domain": "cs.stackexchange", "id": 5762, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "computability, semi-decidability", "url": null }
quantum-field-theory, quantum-optics

$$\langle \psi |a^{\dagger}_{S_m}a_{S_m}| \psi \rangle = \sum_{n=0}^{\infty}\sqrt{\frac{N_S^{2n}}{(N_S+1)^{2n+2}}}\,n = \frac{1}{N_S+1}\sum_{n=0}^{\infty}n\left( \frac{N_S}{N_S+1} \right)^n$$

which I do not know how to evaluate. Am I on the right track here?

Let $a_S$ and $a_I$ denote the annihilation operators on the $S$ and $I$ parts of the state respectively. We define hermitian operators $a_{S_m}$ and $a_{I_m}$ for $m=1,2$ by means of the equations

$$a_S = a_{S_1}+ia_{S_2},\quad a_{I}=a_{I_1}+ia_{I_2}\tag{1}$$

Here we focus on the $S$ part. Taking the adjoint of the first equation in (1) we get

$$a_S^\dagger=a_{S_1}-i a_{S_2}\tag{2}$$

By summing and subtracting (1) and (2) we may invert the relation to get

$$a_{S_1}=\frac{a_S+a_S^\dagger}{2},\quad a_{S_2}=\frac{a_S-a_S^\dagger}{2i}\tag{3}$$

Hence, employing the commutation relation $[a_S,a_S^\dagger]=1$, we get the squares
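The sum that stumps the questioner is the standard arithmetico-geometric series $\sum_{n\ge 0} n x^n = x/(1-x)^2$ for $|x|<1$; with $x = N_S/(N_S+1)$ we have $1-x = 1/(N_S+1)$, so the prefactor $1/(N_S+1)$ collapses everything to exactly $N_S$. A quick numerical check (my own sketch):

```python
def mean_photon_number(N_S, terms=4000):
    # Brute-force evaluation of (1/(N_S+1)) * sum_n n * (N_S/(N_S+1))**n.
    # Closed form: sum_n n x^n = x/(1-x)^2, which here reduces to exactly N_S.
    x = N_S / (N_S + 1.0)
    return sum(n * x ** n for n in range(terms)) / (N_S + 1.0)
```

So the expectation value is the mean photon number $N_S$, as expected for a thermal-looking distribution of this form.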
{ "domain": "physics.stackexchange", "id": 55744, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "quantum-field-theory, quantum-optics", "url": null }
everyday-chemistry

Title: Why does paper crumble on getting wet?

Why does paper crumble on getting wet? This is something I have noticed hundreds of times but cannot think of an explanation. Does it have anything to do with water disturbing the intermolecular forces between cellulose microfibrils in paper, or something else? Also, one peculiar observation is that it does not crumble immediately, but only after the water evaporates. Why?

Polymer properties in water often relate to the relative strength of polymer-polymer interactions versus polymer-water interactions. For example, the solubility ordering amylose > amylopectin > cellulose is attributed to how strongly molecules of each of these polymers interact with water molecules compared to other molecules of themselves.

In the paper-making process, the cellulose fibers of paper dry out and interactions with water tend to get replaced by interactions with nearby polymer molecules. The resulting structure is strong because of polymer-polymer interactions as well as purely physical effects (e.g., "matting" of fibers, and surprisingly strong surface-surface interactions), all of which is enhanced by the compression and tension forces placed on the paper during its manufacture (google it to see photos).

When paper is later made wet, the entering water eventually disrupts these interactions, leading to local regions of what you might think of as micro-solubility. Then, when the paper dries out again as you describe, it probably does so without any compression/tension forces on the paper, reducing the extent to which the matting of the fibers and chemical interactions between polymers take place, and so it crumbles because of those micro-regions of "solubility".

(Disclaimer: The above is just rationalizing off the top of my head. But since it is based on expertise as a PhD Chemist for nearly three decades, I'd give it a 70+% chance of being accurate ;)
{ "domain": "chemistry.stackexchange", "id": 16609, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "everyday-chemistry", "url": null }
homework-and-exercises, radiation, sun, units

For simplicity, I would suggest always using trig functions that accept arguments in radians. Then when you have a value in degrees, you can convert as necessary.

In

$$\Omega = \sin\biggl(\frac{\pi\theta}{180}\biggr)\sin(\delta) + \cos\biggl(\frac{\pi\theta}{180}\biggr)\cos(\delta)\cos(\omega)$$

the formula is written with the conversion factors from degrees to radians built in. In other words, as written, it expects $\theta$ to be in degrees, but it assumes you are using radian mode (i.e. trig functions that take arguments in radians). $\theta$ needs to be converted from degrees to radians, hence the factors of $\frac{\pi}{180}$, but $\delta$ and $\omega$ are assumed to already be in radians.

In

$$\delta = \frac{23.45\pi}{180}\cos\biggl(\frac{2\pi}{365}(172-J)\biggr)$$

you will notice that the factor of $\frac{\pi}{180}$ is already there to convert degrees to radians. But what it is converting is not the output of the cosine, which is a pure number (not an angle); rather, it is the $23.45^\circ$, which is the tilt of the Earth's axis relative to its orbital plane.
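The two formulas, with the unit conversions made explicit (the function and variable names are mine):

```python
import math

def declination(J):
    """Solar declination in radians for day-of-year J. The 23.45 deg axial
    tilt is converted to radians up front; the cosine's argument is already
    in radians, and its output is a pure number."""
    return math.radians(23.45) * math.cos(2 * math.pi / 365 * (172 - J))

def omega_lhs(theta_deg, delta, omega):
    """Left-hand side of the first formula: theta is latitude in DEGREES
    (converted here, matching the pi/180 factors), while delta and omega
    are assumed to already be in radians."""
    theta = math.radians(theta_deg)
    return (math.sin(theta) * math.sin(delta)
            + math.cos(theta) * math.cos(delta) * math.cos(omega))
```

At $J = 172$ the cosine's argument is zero, so the declination comes out to the full tilt, $23.45^\circ$ in radians.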
{ "domain": "physics.stackexchange", "id": 39867, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "homework-and-exercises, radiation, sun, units", "url": null }
The Pearson correlation coefficient is a measure of the strength and direction of the linear relationship between two continuous variables, derived using the method of covariance; the Greek letter \( \rho \) denotes the population value, while \( r \) denotes the sample value. A correlation coefficient calculator computes the sample correlation \( r \) between two variables from a given set of paired data, using the means and standard deviations of the two sets of values, and shows the stepwise procedure with insight into every step of the calculation. The result describes the magnitude of the association between the variables of interest. Such a calculator is a free online tool and a free alternative to Minitab and other paid statistics packages, with the ability to save and share data; you can paste in data copied from a spreadsheet. A correlation coefficient significance calculator then evaluates the test statistic to determine whether the computed correlation is significantly different from zero.
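The computation such a calculator performs is short; a minimal sketch of the sample Pearson \( r \) from paired data:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient of two paired lists:
    covariance of the pairs divided by the product of the two
    (un-normalized) standard deviations."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Perfectly linear data gives \( r = \pm 1 \), with the sign giving the direction of the relationship.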
{ "domain": "calcigarro.cat", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9814534316905263, "lm_q1q2_score": 0.8135648360147236, "lm_q2_score": 0.8289388062084421, "openwebmath_perplexity": 840.1354149351382, "openwebmath_score": 0.6115067005157471, "tags": null, "url": "http://booking.calcigarro.cat/restaurants-in-decuvlq/1e8d7d-correlation-coefficient-calculator" }
methods, when solving the linear constant-coefficient hyperbolic equations.

EXAMPLE-1: Below is a MATLAB program to implement the fourth-order Runge-Kutta method to solve y' 3 e t 0. These methods were developed around 1900 by the German mathematicians C. Runge and M. W. Kutta.

The BDF integrator uses a diagonally implicit Runge-Kutta starter. The BDF routine can deal with fully implicit index-1 DAEs: $\forall t \in [0,T] : F(\dot y(t),y(t),u(t),p,T) = 0$.

I know that I need to reduce the equation into two first-order ODEs; however, I am unsure of how to properly proceed after this stage.

The Runge-Kutta 2nd-order method is a numerical technique used to solve an ordinary differential equation of the form $\frac{dy}{dx} = f(x, y),\ y(0) = y_0$. Only first-order ordinary differential equations can be solved by using the Runge-Kutta 2nd-order method.

Below is the formula used to compute the next value $y_{n+1}$ from the previous value $y_n$. (9.15) will have the same order of accuracy as Taylor's method in (9. The difference is that in each step, instead of using just $f(x_n, y_n)$, higher-order explicit Runge-Kutta methods take a

We then present fifth- and sixth-order methods requiring fewer derivative function evaluations per time step than fifth- and sixth-order Runge–Kutta methods applicable to nonlinear problems.

Enter initial value of x i.

Here is my problem: In essence, the Runge-Kutta method can be seen as multiple applications of Euler's method at intermediate values, namely between and.

Appendix A: Runge-Kutta Methods. The Runge-Kutta methods are an important family of iterative methods for the approximation of solutions of ODEs, that were developed around 1900 by the German mathematicians C. Runge and M. W. Kutta.

Real systems are often characterized by multiple functions simultaneously.

4 Runge-Kutta Methods. Motivation: Obtain the high-order accuracy of Taylor's method without knowledge of derivatives of $f$.

I want to know how to program a code that will solve the ODE using
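The example equation quoted above is garbled in the source, so as a hedged illustration here is the classical fourth-order Runge-Kutta step applied to the test problem $y' = -y$, $y(0) = 1$ (my choice of test problem, not the original's), whose exact solution is $e^{-t}$:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y):
    four slope evaluations combined with weights 1, 2, 2, 1."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def solve(f, t0, y0, h, steps):
    t, y = t0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Illustrative test problem: y' = -y, y(0) = 1, exact solution exp(-t)
approx = solve(lambda t, y: -y, 0.0, 1.0, 0.1, 10)  # integrate to t = 1
```

With step size $h = 0.1$ the global error is $O(h^4)$, so the result agrees with $e^{-1}$ to far better than the step size would suggest for Euler's method.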
{ "domain": "freccezena.it", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.9873750529474512, "lm_q1q2_score": 0.8072526373313896, "lm_q2_score": 0.817574471748733, "openwebmath_perplexity": 695.2215890655395, "openwebmath_score": 0.7012003660202026, "tags": null, "url": "http://gwqt.freccezena.it/runge-kutta-2nd-order-method-solved-examples.html" }
machine-learning, python, predictive-modeling, data-science-model, model-selection

However, the model's accuracy (test accuracy) turned out to be 93% while the baseline is 94.3%. The training accuracy is 99%. Compared to the test accuracy of 94.3%, I don't think there's an over-fitting problem. The logistic regression also has the same problem. Based on the correlation plot, most independent variables have a pretty weak relationship with the target variable, smaller than +/- 0.3. What should I do next to improve my model accuracy? I tried parameter tuning but it doesn't help a lot.

This is a common problem with rare-events modelling, and your options are relatively limited (as far as I am aware, at least). It may well be the case that the features you're using are not very informative with respect to predicting these outcomes. The major issue is that your predictors, in the context of this model, are not very informative. The model tries to balance false positives and false negatives, but with so few true positives any mistakenly-predicted positive outcome will have a large effect on your classification accuracy.

It seems likely in this case that your predictors do not offer enough information to predict outcomes well. You may have reached the ceiling of what this model can do. This could be an artifact of the rarity of the "hired" outcome in your data set, or it could simply be that the relationship between these predictors and the outcome is weak.

There are a few options, involving the use of different techniques (like a Firth regression, designed for rare-events modelling). But using different predictors may be the best option, if it is possible to do so. Not every event can be modelled well with some arbitrary set of features, and it may be that you've found one of those.
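A quick illustration of why raw accuracy is uninformative here (the confusion-matrix counts are my own invented numbers, chosen to mirror the ~94.3% base rate mentioned above):

```python
def metrics(tp, fp, fn, tn):
    """Accuracy and rare-class recall from a confusion matrix."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return accuracy, recall

# "Always predict the majority class" on 1000 cases with 57 positives:
# 94.3% accuracy while finding zero hires.
acc_baseline, rec_baseline = metrics(tp=0, fp=0, fn=57, tn=943)

# A model that finds about half the positives at the cost of 40 false
# alarms scores LOWER accuracy but is far more useful.
acc_model, rec_model = metrics(tp=28, fp=40, fn=29, tn=903)
```

This is why, for rare outcomes, recall, precision, or a proper scoring rule is a better yardstick than accuracy against the baseline.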
{ "domain": "datascience.stackexchange", "id": 5242, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "machine-learning, python, predictive-modeling, data-science-model, model-selection", "url": null }
pressure, earth, air, moon

Title: What changes about a helium-filled balloon on the surface of the moon?

In terms of air pressure, I think that the pressure inside the balloon should be equal to the air pressure outside so that it does not burst. So how will a helium-filled balloon behave on the moon in comparison to earth? Here, the balloon does not burst unless it has been hit with enough force to burst it. Does this mean that the air pressure inside must be equal to the air pressure outside? Will this change because there is no pressure on the moon?

You don't really need to have an equal pressure inside and outside of the balloon, as long as the balloon can withstand some pressure gradient. Without any data on the balloon's resistance to pressure it's hard to predict exactly what will happen, but I think it's a good guess to say that if you inflate a helium balloon on the Moon you will need a really small amount of gas, in comparison to Earth, to create the correct pressure gradient. Moreover, since there is no atmosphere to give buoyancy to the helium balloon, it will not float, but rather drop to the floor like any other object.
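An ideal-gas sketch of the "really small amount of gas" claim (all numbers here are illustrative assumptions on my part, not from the answer): on Earth the gas inside must reach atmospheric pressure plus whatever gauge pressure the skin supports, while in vacuum only the gauge pressure itself is needed.

```python
# Ideal gas law n = PV/(RT); assumed values for a small party balloon.
R = 8.314          # J/(mol K)
T = 293.0          # K, room temperature
V = 0.01           # m^3, roughly a 27 cm diameter balloon
gauge = 5000.0     # Pa of gauge pressure the rubber supports (assumed)
P_atm = 101325.0   # Pa, Earth's atmospheric pressure

moles_earth = (P_atm + gauge) * V / (R * T)  # inside pressure = P_atm + gauge
moles_moon = gauge * V / (R * T)             # vacuum outside: only the gauge
ratio = moles_earth / moles_moon             # roughly 21x more gas on Earth
```

Under these assumptions the same balloon needs about twenty times more helium on Earth than on the Moon, which is the pressure-gradient point made above.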
{ "domain": "physics.stackexchange", "id": 35278, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "pressure, earth, air, moon", "url": null }
# Probability of winning a game where you sample an increasing sequence from a uniform distribution This is an interview question I got and could not solve. Consider a two-person game where A and B take turns sampling from a uniform distribution $$U[0, 1]$$. The game continues as long as they get a continuously increasing sequence. If, at any point, a player gets a number less than the last number (the largest number so far), that player loses. A goes first. What is the probability of A winning? For example, if the sequence is $$0.1 (A), 0.15 (B), 0.2 (A), 0.25 (B), 0.12 (A)$$, then A loses.
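A Monte Carlo sketch of the game (my own code, not from the question). Since the first $n$ draws are all increasing with probability $1/n!$, the run first breaks on draw $n$ with probability $1/(n-1)! - 1/n!$; summing over A's losing draws $n = 3, 5, 7, \dots$ gives $\sum (1/2! - 1/3!) + (1/4! - 1/5!) + \cdots = 1/e$, so A wins with probability $1 - 1/e \approx 0.632$. The simulation below checks this.

```python
import random

def first_player_loses():
    """Play one game; return True if player A (who draws first) loses."""
    last = 0.0
    turn = 0  # even turn indices are A's draws (0-based: draws 1, 3, 5, ...)
    while True:
        draw = random.random()
        if draw < last:
            # The player who broke the increasing run loses.
            return turn % 2 == 0
        last = draw
        turn += 1

random.seed(1)
trials = 200_000
p_a_wins = sum(not first_player_loses() for _ in range(trials)) / trials
# Expect p_a_wins close to 1 - 1/e ~ 0.6321
```

Note the first draw can never lose (it only has to beat 0), which is why A's losing draws start at draw 3.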
{ "domain": "stackexchange.com", "id": null, "lm_label": "1. YES\n2. YES", "lm_name": "Qwen/Qwen-72B", "lm_q1_score": 0.973240718366854, "lm_q1q2_score": 0.8252767559244527, "lm_q2_score": 0.847967764140929, "openwebmath_perplexity": 224.0842121903894, "openwebmath_score": 0.9219028353691101, "tags": null, "url": "https://stats.stackexchange.com/questions/550847/probability-of-winning-a-game-where-you-sample-an-increasing-sequence-from-a-uni" }
orbit, orbital-mechanics, coordinate, saturn Kronocentric distance, a (R_sat): 1.862330120959813 a (km): 112238.91173000602 altitude (km): 51970.91173000602 R_cyl: 32361.60942773073 lat_centric: 54.79195278661741 lat_graphic: 60.13756294902166 Verification: Using Cosmographia to generate a view of Saturn's horizon at $\phi$g = 60.1°, you can see just into the B-ring:
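The step from lat_centric to lat_graphic in the numbers above is consistent with the standard planetocentric-to-planetographic conversion $\tan\phi_g = \tan\phi_c/(1-f)^2$; a sketch (the Saturn flattening value is my assumption of the IAU figure, not stated in the source):

```python
import math

SATURN_FLATTENING = 0.09796  # assumed IAU value for Saturn's oblateness

def graphic_from_centric(lat_centric_deg, f=SATURN_FLATTENING):
    """Planetocentric -> planetographic latitude for an oblate spheroid:
    tan(phi_g) = tan(phi_c) / (1 - f)^2."""
    phi_c = math.radians(lat_centric_deg)
    return math.degrees(math.atan(math.tan(phi_c) / (1.0 - f) ** 2))

lat_g = graphic_from_centric(54.79195278661741)  # should land near 60.14 deg
```

Plugging in the listed lat_centric reproduces the listed lat_graphic to within the rounding of the assumed flattening.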
{ "domain": "astronomy.stackexchange", "id": 6089, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "orbit, orbital-mechanics, coordinate, saturn", "url": null }
rosinstall Title: Is $ROS_PACKAGE_PATH after source install okay? After I did a source install to /media/apps/ros, as suggested in this question, is my $ROS_PACKAGE_PATH, shown below, okay? lucid@lucid-desktop:~$ echo $ROS_PACKAGE_PATH
{ "domain": "robotics.stackexchange", "id": 8119, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "rosinstall", "url": null }
hilbert-space, conformal-field-theory, ground-state

Title: Ground state in radial quantization -- Why isn't $\phi(0) |0\rangle = |0 \rangle$?

I am trying to reconcile two perspectives on the ground state defined through the path integral. In Tom Hartman's gravity lectures (http://www.hartmanhep.net/topics2015/gravity-lectures.pdf) he says that any state $| Y \rangle$ evolved for a long enough euclidean time is projected onto the ground state, since

$$\mathrm{e}^{-\tau H}| Y \rangle = \sum_n y_n \mathrm{e}^{-\tau E_n}|n \rangle \rightarrow \mathrm{e}^{-\tau E_0}y_0 | 0\rangle$$

in the limit $\tau \rightarrow \infty$, where $E_0$ is the smallest energy. Hence, whatever boundary data $\phi_1, \, \phi_2$ you take, we have

$$\lim_{\tau \rightarrow \infty}\langle \phi_2 | \mathrm{e}^{- \tau H} | \phi_1 \rangle = \lim_{\tau \rightarrow \infty} \int_{\phi(-\tau/2)=\phi_1}^{\phi(\tau/2)=\phi_2}\mathcal{D}\phi\,\mathrm{e}^{-S_E[\phi]} = \langle0 | 0\rangle$$

up to some normalization.
{ "domain": "physics.stackexchange", "id": 87044, "lm_label": null, "lm_name": null, "lm_q1_score": null, "lm_q1q2_score": null, "lm_q2_score": null, "openwebmath_perplexity": null, "openwebmath_score": null, "tags": "hilbert-space, conformal-field-theory, ground-state", "url": null }