What is "Containment order"? Could I get a friendly explanation?
Question: Reading the Wikipedia page was a little dense for me. Can anyone offer a friendly explanation of what this means? It sounds like it is saying that if you have two ordered collections, and they are "ordered by containment", then they will be in roughly the same order as each other? Answer: I'll try a down-to-earth definition \ example: When speaking of a set $X$, let's say $X=\{1,\dots,100\}$, denote the subset $X_i = \{1,\dots,i\}$. It holds that $X_1\subseteq X_2\subseteq\dots\subseteq X_{100}$, so you could say there is a containment order over these subsets, because each subset contains the previous one in the order I stated. This order came from, is connected to, and is isomorphic to the order $1 \leq 2 \leq \dots \leq 100$, or, if you prefer it more formally, the fact that $(X,\leq)$ is a poset.
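The chain of subsets in the example can be checked mechanically; a short Python sketch of the same containment order:

```python
# Containment order on the subsets X_i = {1, ..., i} of X = {1, ..., 100}:
# each X_i is contained in X_{i+1}, mirroring the order 1 <= 2 <= ... <= 100.
X = set(range(1, 101))
subsets = [set(range(1, i + 1)) for i in range(1, 101)]  # X_1, ..., X_100
assert subsets[-1] == X

# Every consecutive pair is ordered by containment.
assert all(a <= b for a, b in zip(subsets, subsets[1:]))

# The containment order is isomorphic to the usual order on 1..100:
# X_i <= X_j (as sets) exactly when i <= j (as numbers).
for i in (3, 50):
    for j in (4, 50, 99):
        assert (subsets[i - 1] <= subsets[j - 1]) == (i <= j)
```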
{ "domain": "cs.stackexchange", "id": 10516, "tags": "terminology" }
Pointers and Dereference Operator
Question: *a = *b In C, the above expression is interpreted as assigning the value stored at the address contained in b to the object at the address contained in a. But, according to operator precedence and associativity, * has higher priority than the assignment operator '=' and associates right to left. If we interpret the expression by these rules alone, it becomes: *a = *b, i.e. (*a) = (*b), i.e. 10 = 20 (supposing *a is 10 and *b is 20). So this should be an error under the precedence rules, and also under the rule that there can't be an expression on the left-hand side of an assignment operator. Then why does it work? Answer: I'm not sure if this is on topic here. Anyway, (*a) in (*a)=(*b) denotes an lvalue referring to the object pointed to by a. On the right side of the assignment, (*b) also denotes an lvalue, but this lvalue undergoes lvalue conversion and is converted to the actual value stored in the object pointed to by b (i.e., to 20). In the end, the value 20 is assigned to the object pointed to by a. See, for example, this discussion about lvalues and rvalues.
{ "domain": "cs.stackexchange", "id": 18654, "tags": "c" }
What unit system does Fahrenheit belong to?
Question: Wikipedia's page for Imperial Units does not list Fahrenheit. The corresponding page for SI Units lists Kelvin as an SI unit, and Celsius as a derived SI unit. This leads me to believe that Fahrenheit does not belong to the Imperial system. Yet Fahrenheit is listed as a page belonging to the Imperial Units category, also on Wikipedia. I know that the US mostly uses the Imperial system and also uses Fahrenheit for temperature. Meanwhile, the UK mostly uses the metric system and also uses Celsius for temperature. Is this just a coincidence, or is Fahrenheit actually a part of the Imperial system? Answer: The Weights and Measures Act (the origin of the Imperial units) does not speak of temperature. It was intended to create a uniform system for trade. You don't sell temperature, in the way you sell a pint of milk or a yard of cloth. And frankly, when it was first conceived (before Magna Carta, which already stated "There shall be but one Measure throughout the Realm": "One measure of Wine shall be through our Realm, and one measure of Ale, and one measure of Corn, that is to say, the Quarter of London; and one breadth of dyed Cloth, Russets, and Haberjects, that is to say, two Yards within the lists. (2) and it shall be of Weights as it is of Measures."), nobody had a sense of temperature, let alone a reliable way to measure it. This is why Fahrenheit is an "orphan" measure, and not always considered part of the Imperial system. It didn't come into being until 1724 - about 500 years after Magna Carta.
{ "domain": "physics.stackexchange", "id": 21237, "tags": "soft-question, temperature, terminology, units, metrology" }
How should I define the action space for a card game like Magic: The Gathering?
Question: I'm trying to learn about reinforcement learning techniques. I have a little background in machine learning from university, but have never done more than use a CNN on the MNIST database. My first project was to use reinforcement learning on tic-tac-toe, and that went well. In the process, I thought about creating an AI that can play a card game like Magic: The Gathering, Yu-Gi-Oh, etc. However, I need to think of a way to define an action space. Not only are there thousands of combinations of cards possible in a single deck, but we also have to worry about the various types of decks the machine is playing and playing against. Although I know this is probably way too advanced for a beginner, I find attempting a project like this challenging and stimulating. So, I looked into several different approaches for defining an action space. But I don't think this example falls into a continuous action space, or one in which I could remove actions when they are not relevant. I found a post on this Stack Exchange that seems to be asking the same question. However, the answer I found didn't seem to solve any of my problems. Wouldn't defining the action space as another level of game states just mask the exact same problem? My main question boils down to: Is there an easy/preferred way to make an action space for a game as complex as Magic? Or is there another technique (other than RL) that I have yet to see, which is better suited here? Answer: There are several different ways you can model the state and action spaces in such sequential (extensive-form) environments/games. For environments with small action spaces, or those typically introduced to beginning RL students, the state space and action space remain constant along an agent's trajectory (such games are termed normal-form games when there are multiple agents).
In sequential games, which can be illustrated as trees, a "state" is analogous to an "information set", defined as the sequence (tuple) of actions and observations since the beginning of the game's episode. Terminal states (leaf nodes) exist, and the action space $\mathcal{A}[x]$ at an information set $x$ can be defined as the union of action sequences that can be taken to each terminal state, not counting terminal states that cannot be reached from the current information set. In the examples above, I discussed games, such as the ones you mentioned, where more than one agent can interact with the environment; but this is a generalization of RL and can also be applied when only one agent is maximizing its reward.
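As a complementary, practical note: many card-game agents sidestep one huge fixed action space by generating the legal actions from the current state and masking everything else. A toy Python sketch of that idea (GameState, legal_actions, and the cost rule are hypothetical stand-ins, not from any actual Magic engine):

```python
# Action masking: enumerate only the actions that are legal in this state,
# then pick the best among them. The game rules here are deliberately toy.
from dataclasses import dataclass, field

@dataclass
class GameState:
    hand: list = field(default_factory=list)   # card costs in hand (toy rule)
    mana: int = 0

def legal_actions(state):
    """Enumerate actions valid in this state: pass, or play an affordable card."""
    actions = ["pass"]
    actions += [("play", i) for i, cost in enumerate(state.hand) if cost <= state.mana]
    return actions

def greedy_action(state, q_value):
    """Pick the best action among only the legal ones (action masking)."""
    return max(legal_actions(state), key=q_value)

state = GameState(hand=[1, 3, 5], mana=3)
acts = legal_actions(state)
assert acts == ["pass", ("play", 0), ("play", 1)]   # the cost-5 card is masked out

# toy Q-function that prefers expensive plays
q = lambda a: 0 if a == "pass" else state.hand[a[1]]
assert greedy_action(state, q) == ("play", 1)
```

The point is that the network or value function never has to rank illegal actions; the environment supplies the per-state action set, which keeps the effective action space small even when the global one is enormous.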
{ "domain": "ai.stackexchange", "id": 2510, "tags": "reinforcement-learning, ai-design, action-spaces" }
Maximum number of points that two paths can reach
Question: Suppose we are given a list of $n$ points, whose $x$ and $y$ coordinates are all non-negative. Suppose also that there are no duplicate points. We can only go from point $(x_i, y_i)$ to point $(x_j, y_j)$ if $x_i \le x_j$ and $y_i \le y_j$. The question is: given these $n$ points, what is the maximum number of points that we can reach if we are allowed to draw two paths that connect points using the above rule? Paths must start from the origin and may contain repeated points. $(0, 0)$ is of course not included in the points reached. An example: given $(2, 0), (2, 1), (1, 2), (0, 3), (1, 3), (2, 3), (3, 3), (2, 4), (1, 5), (1, 6)$, the answer is $8$ since we can take $(0, 0) \rightarrow (2, 0) \rightarrow (2, 1) \rightarrow (2, 3) \rightarrow (2, 4)$ and $(0, 0) \rightarrow (1, 2) \rightarrow (1, 3) \rightarrow (1, 5) \rightarrow (1, 6)$. If we are allowed to draw only one path, I can easily solve the question by dynamic programming that runs in $O(n^2)$: I first sort the points by increasing $x_i+y_i$. Let $D[i]$ be the maximum number of points that one can pick up from points $1$ to $i$ in the sorted list, ending at point $i$. Then $D[1] = 1$ and $D[i] = \max\limits_{1\le j < i, x_j \le x_i, y_j \le y_i} D[j] + 1$. The answer is then $\max\limits_{1\le i \le n} D[i]$. But I cannot come up with a recurrence relation for two paths. If anyone has any idea about such a recurrence relation, I would be happy to hear it. Answer: The problem, restated and generalized: given a finite set $S$ equipped with a partial order $\le$, find chains $C_1, C_2 \subseteq S$ maximizing $\lvert C_1 \cup C_2 \rvert$. The question is about the case where $S \subseteq \mathbb R_+^2$ and $(x, y) \le (z, w) \Longleftrightarrow x \le z \wedge y \le w$. Naively, one might try to find the single best chain in $S^2$, where best is measured by how many distinct values the components of the chain have.
Unfortunately, one component can retrace the steps of the other, e.g., $$\bigl((0,0),(0,0)\bigr) < \bigl((1,0),(0,0)\bigr) < \bigl((2,0),(0,0)\bigr) < \bigl((2,0),(1,0)\bigr),$$ so this notion of best does not have optimal substructure. Instead, we look for chains in the set $T := \{(x, y) \mid (x, y) \in S^2 \wedge x \nless y \wedge y \nless x\}$. By requiring that the components be equal or incomparable, we prevent retracing but now need to argue that some best chain conforms to the new requirement.

Lemma 1 (no retracing). Let $C \subseteq T$ be a chain and define $C_1 := \{x \mid (x, y) \in C\}$ and $C_2 := \{y \mid (x, y) \in C\}$. For all $z \in S$, we have $z \in C_1 \cap C_2$ if and only if $(z, z) \in C$.

Proof. The if direction is trivial. In the only if direction, for all $z \in C_1 \cap C_2$, there exist $x, y \in S$ such that $(x, z), (z, y) \in C$. Since $C$ is a chain, $(x, z) \le (z, y) \vee (z, y) \le (x, z)$. Assume symmetrically that $(x, z) \le (z, y)$, which implies that $x \le z \le y$. We know by the definition of $T$ that $x \nless z \wedge z \nless y$, so $x = z = y$, and $(z, z) \in C$.

Lemma 2 (existence of restricted best chain). For all chains $C_1, C_2 \subseteq S$, there exists a chain $C \subseteq T$ such that $C_1 \subseteq \{x \mid (x, y) \in C\} \subseteq C_1 \cup C_2$ and $C_2 \subseteq \{y \mid (x, y) \in C\} \subseteq C_1 \cup C_2$.

Proof (revised). We give an algorithm to construct $C$. For convenience, define sentinels $\bot, \top$ such that $\bot < x < \top$ for all $x \in S$. Let $C_1' := C_1 \cup \{\top\}$ and $C_2' := C_2 \cup \{\top\}$.

1. Initialize $C := \varnothing$ and $x := \bot$ and $y := \bot$. An invariant is that $x \nless y \wedge y \nless x$.
2. Let $x'$ be the next element of $C_1$, that is, $x' := \inf \{z \mid z \in C_1' \wedge x < z\}$, and let $y'$ be the next element of $C_2$, that is, $y' := \inf \{w \mid w \in C_2' \wedge y < w\}$.
3. If $x' \nless y' \wedge y' \nless x'$, set $(x, y) := (x', y')$ and go to step 9.
4. If $y < x' < y'$, set $(x, y) := (x', x')$ and go to step 9.
5. If $y \nless x' < y'$, set $x := x'$ and go to step 9. Note that $x < x' \wedge x \nless y$ implies that $x' \nless y$.
6. If $x < y' < x'$, set $(x, y) := (y', y')$ and go to step 9.
7. If $x \nless y' < x'$, set $y := y'$ and go to step 9. Note that $y < y' \wedge y \nless x$ implies that $y' \nless x$.
8. This step is never reached, as the conditions for steps 3-7 are exhaustive.
9. If $x \ne \top$ (equivalently, $y \ne \top$), set $C := C \cup \{(x, y)\}$ and go to step 2.

Dynamic Program. For all $(x, y) \in T$, compute $$ D[x, y] := \sup\biggl(\Bigl\{D[z, w] + [x \ne z] + [y \ne w] - [x = y] \mathrel{\Bigl|\Bigr.} (z, w) \in T \wedge (z, w) < (x, y)\Bigr\} \cup \bigl\{2 - [x = y]\bigr\}\biggr), $$ where $[\textit{condition}] = 1$ if $\textit{condition}$ is true and $[\textit{condition}] = 0$ if $\textit{condition}$ is false. By Lemma 1, it follows that the bracket expressions correctly count the number of new elements. By Lemma 2, the optimal solution to the original problem is found.
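As a concrete baseline, the single-path dynamic program described in the question can be sketched in Python (a plain $O(n^2)$ implementation, run here on the question's example data):

```python
# Single-path DP: sort by x + y (increasing) so every possible predecessor
# appears earlier, then D[i] = 1 + max D[j] over earlier dominated points j.
def longest_single_path(points):
    pts = sorted(points, key=lambda p: (p[0] + p[1], p))
    n = len(pts)
    D = [1] * n
    for i in range(n):
        for j in range(i):
            if pts[j][0] <= pts[i][0] and pts[j][1] <= pts[i][1]:
                D[i] = max(D[i], D[j] + 1)
    return max(D)  # the origin (0, 0) itself is not counted

pts = [(2, 0), (2, 1), (1, 2), (0, 3), (1, 3), (2, 3), (3, 3),
       (2, 4), (1, 5), (1, 6)]
assert longest_single_path(pts) == 4   # e.g. (1,2) -> (1,3) -> (1,5) -> (1,6)
```

The two-path recurrence $D[x, y]$ above generalizes this by carrying the frontier of both chains as the DP state instead of a single endpoint.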
{ "domain": "cs.stackexchange", "id": 318, "tags": "computational-geometry, dynamic-programming, recurrence-relation" }
Is the Bellman backup unbiased?
Question: This comes from CS285 2023 Fall HW3. In my opinion, if $\hat{Q}$ is an unbiased estimate of $Q$, then $$ \begin{align} \mathbb{E}_{D \sim P}[B_{D}\hat{Q} - B_{D}Q] &= \mathbb{E}_{D \sim P}[r(s,a) + \gamma \max_{a'}\hat{Q}(s', a') - r(s,a) - \gamma \max_{a'}Q(s', a')]\\ &=\mathbb{E}_{D \sim P}[\gamma \max_{a'}\hat{Q}(s', a')- \gamma \max_{a'}Q(s', a')]\\ &= 0 \end{align}$$ So $B_D\hat{Q}$ is an unbiased estimate of $B_D Q$ and the answer is yes. Is this right? Answer: Given your provided definition of an unbiased estimator at each input, and even though $\hat{Q}$ is an unbiased estimator of the true action value $Q$ at each state-action pair, applying the Bellman backup operator defined in your reference, which involves a max operator, results in a biased $B\hat{Q}$ that does not equal $BQ$ in expectation at each state-action pair. The maximum of $n$ i.i.d. random variables does not necessarily follow the same distribution as the originals; it may follow a Gumbel distribution, for example, which need not have the same expectation, as pointed out by another answer. Of course, in reality, off-policy TD Q-learning cannot ensure there always exists such an unbiased estimator $\hat{Q}$ at each state-action input.
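The upward bias of the max over unbiased estimates is easy to see numerically; a small standard-library simulation with made-up action values:

```python
# Even if Qhat(s', a') is unbiased for each a', max_a' Qhat(s', a') is a
# biased (over)estimate of max_a' Q(s', a'): E[max] >= max of the means.
import random
random.seed(0)

true_q = [1.0, 1.0, 1.0]          # true action values; true max = 1.0
trials = 20000
avg_max = sum(
    max(q + random.gauss(0.0, 1.0) for q in true_q)  # unbiased noisy estimates
    for _ in range(trials)
) / trials

assert max(true_q) == 1.0
assert avg_max > max(true_q) + 0.5   # clear upward bias (~0.85 for 3 actions)
```

Each individual estimate is unbiased, yet the empirical mean of the max sits well above the true max, which is exactly the overestimation the answer describes.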
{ "domain": "ai.stackexchange", "id": 4216, "tags": "reinforcement-learning, proofs, temporal-difference-methods, bias" }
How can I understand this statement about RNNs and hidden layers?
Question: In the lecture, there was a statement: Recurrent neural networks with multiple hidden layers are just a special case that has some of the hidden-to-hidden connections missing. I understand recurrent to mean that a layer can have connections to the previous layer and to the same layer as well. Is there a visualization available to easily understand the above statement? Answer: I assume the statement was made for Elman recurrent neural networks, because as far as I know, that is the only type of neural network for which the statement is valid. Let's say we have an Elman recurrent neural network with one input neuron, one output neuron and one hidden layer with two neurons. In total there are 10 connections. As the image shows, neuron A receives the combined previous output of both neurons A and B as input. The same goes for neuron B. This is not the case when we split the neurons up into multiple layers; the context neuron(s) are only used by neurons that are in the same layer. Let's say we now use multiple hidden layers and keep the number of neurons the same. In total there are 7 connections now (image below). That is 3 fewer than in the first example, which has only one hidden layer. So which connections do we miss? That is shown in the bottom image. (I had to paste these two images together into one image because my reputation only allows me to post 2 links.) Please note the cross; that connection between neurons A and B is not there in the first image, because it would be some kind of random recurrent connection. The first and the last image are exactly the same. I think that if you compare the first and the last image, you will agree that the statement is true.
{ "domain": "ai.stackexchange", "id": 120, "tags": "recurrent-neural-networks, terminology, hidden-layers" }
Find number of possibilities
Question: We have a column which has the following format: (1, 1, 1, 0, 1). Here is the definition of what it means for two columns to be compatible: Using the notation $O_i$ to denote the collection of rows possessing a 1 in their $i$-th element, we conclude that one of three possibilities must be true: $O_i \subseteq O_j$, $O_j \subseteq O_i$, or $O_i$ and $O_j$ are disjoint (the first two cases include the possibility that $O_i = O_j$). If columns $i$ and $j$ satisfy this condition, then we call them compatible. The question is: how many possible columns of length 5 are compatible with the column (1, 1, 1, 0, 1)? You can read more here, where the problem is described on pages 14-16. Thanks in advance! Answer: Let $n$ be the number of rows (in your case $n=5$) and $|O_i|$ the cardinality of $O_i$. We can calculate the number of compatible columns as the sum of the number of columns with $O_i \subseteq O_j$, with $O_j \subseteq O_i$, and with $O_i, O_j$ disjoint. Number of $O_i \subseteq O_j$: here we need to find the number of supersets of our $O_i$. That means we need to find the number of sets which contain the indices in $O_i$; for each index not in $O_i$, we have two choices: it is contained, or it is not contained, in $O_j$. Therefore the number of supersets is $2^{n-|O_i|}$, which is 2 to the power of the number of elements not in our set. Number of $O_i, O_j$ disjoint: here all elements of $O_i$ are fixed not to be in $O_j$, and each index not in $O_i$ can again be in $O_j$ or not. That means the number of possible sets is again $2^{n-|O_i|}$. Number of $O_j \subseteq O_i$: here we need to find the number of subsets of $O_i$, which is the size of its power set, $2^{|O_i|}$. So finally, the number of compatible columns can be calculated as: $2^{n-|O_i|} + 2^{n-|O_i|} + 2^{|O_i|} - 2$. Why $-2$? We have counted 2 sets twice: the empty set, which is a subset of $O_i$ and is also counted among the disjoint sets; and $O_i$ itself, which we counted both in $O_i \subseteq O_j$ and in $O_j \subseteq O_i$. This means for your example we have $16 + 2 + 2 - 2 = 18$ compatible sets. Note: the all-zero column (which corresponds to $O_i=\{\}$) is a special case for which the formula above does not work, as all disjoint sets and all supersets are counted twice. For this special case, the number of compatible columns is simply $2^n$.
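The formula can be sanity-checked by brute force over all $2^5$ columns; a short Python sketch:

```python
# Brute-force check of the counting argument for n = 5 and column (1, 1, 1, 0, 1).
from itertools import product

n = 5
O_i = frozenset(k for k, bit in enumerate((1, 1, 1, 0, 1)) if bit)

def compatible(O_j):
    # the three cases from the definition: superset, subset, or disjoint
    return O_i <= O_j or O_j <= O_i or not (O_i & O_j)

count = sum(
    compatible(frozenset(k for k, bit in enumerate(col) if bit))
    for col in product((0, 1), repeat=n)
)

formula = 2 ** (n - len(O_i)) + 2 ** (n - len(O_i)) + 2 ** len(O_i) - 2
assert formula == 18
assert count == formula   # exhaustive check agrees with the formula
```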
{ "domain": "bioinformatics.stackexchange", "id": 368, "tags": "gene, genome, homework" }
VBA - If value in sheet1 found in sheet2, then delete data from sheet2
Question: I have 2 sheets set up: Exclusions and Issues. Issues has a list of CASE IDs and columns that list the "Issue". Exclusions will be populated with CASE IDs that are to be excluded (and removed) from the Issues sheet. My question is twofold: Is my current code handling this correctly? Are there any ways to improve it? Is there a way to have the code cycle through all columns dynamically? Or is it just easier to copy the For/Next loop for each column on the "Issues" sheet? Code below:

Sub Exclusions()
    'find exclusions and remove from issues sheet. once done delete any completely blank row
    Dim i As Long
    Dim k As Long
    Dim lastrow As Long
    Dim lastrowex As Long
    Dim DeleteRow As Long
    Dim rng As Range

    On Error Resume Next
    Sheets("Issues").ShowAllData
    Sheets("Exclusions").ShowAllData
    On Error GoTo 0

    Application.ScreenUpdating = False
    lastrowex = Sheets("Exclusions").Cells(Rows.Count, "J").End(xlUp).Row
    With ThisWorkbook
        lastrow = Sheets("Issues").Cells(Rows.Count, "A").End(xlUp).Row
        For k = 2 To lastrowex
            For i = 2 To lastrow
                If Sheets("Exclusions").Cells(k, 10).Value <> "" Then
                    If Sheets("Exclusions").Cells(k, 10).Value = Sheets("Issues").Cells(i, 1).Value Then
                        Sheets("Issues").Cells(i, 11).ClearContents
                    End If
                End If
            Next i
        Next k
    End With

    On Error Resume Next
    Sheets("Issues").Activate
    For Each rng In Range("B2:P" & lastrow).Columns
        rng.SpecialCells(xlCellTypeBlanks).EntireRow.Delete
    Next rng
    Application.ScreenUpdating = True
End Sub

Data format, "Issues" sheet:

CASE ID   Issue 1      Issue 2   Issue 3
ABC123    No address   No Name   No Number

"Exclusions" sheet:

Issue 1   Issue 2   Issue 3
ABC123    DEF123    ABC123

Answer: My example below is based on most often working with large datasets and opts for speed in data handling. You didn't state the size of your Issues and Exclusions sheets, so I worked with a large dataset in mind.
A couple of quick things to get out of the way, because these are good practices to make into consistent habits: Always use Option Explicit. Avoid a "wall of declarations", plus the very useful other tips on that site. Establish specific object variables for the worksheets, instead of always using Sheets. Further, by only using Sheets you're implying that the code should operate on the currently active worksheet. This is quite often correct, but will trip you up at some point when you intend something different. So I make a habit of defining exactly which workbook and worksheet I'm using by initializing variables with fully qualified references.

Dim exclusionsWS As Worksheet
Dim issuesWS As Worksheet
Set exclusionsWS = ThisWorkbook.Sheets("Exclusions")
Set issuesWS = ThisWorkbook.Sheets("Issues")

While I understand your rationale for handling the possible ShowAllData errors, I would much rather be clear about "why" you need to do this. So I'd avoid the On Error Resume Next by making it clear I'm checking for a possible AutoFilter:

With exclusionsWS
    If (.AutoFilterMode And .FilterMode) Or .FilterMode Then
        .AutoFilter.ShowAllData
    End If
End With
With issuesWS
    If (.AutoFilterMode And .FilterMode) Or .FilterMode Then
        .AutoFilter.ShowAllData
    End If
End With

Next, because there may be a large dataset, I would copy the data on the worksheet into a memory-based array. Working out of memory is MUCH faster than working with the Range object in Excel. Later, the process of checking to see if a value exists in another dataset is perfect for a Dictionary. So we'll loop through all the exclusions and create a dictionary item for each entry.
Dim exclusionData As Variant
exclusionData = exclusionsWS.UsedRange

Dim exclusion As Dictionary
Set exclusion = New Dictionary
Dim i As Long
For i = 2 To lastRow
    If Not exclusionData(i, 10) = vbNullString Then
        exclusion.Add exclusionData(i, 10), i
    End If
Next i

After that, my example shows checking each issue against the Dictionary and clearing out any excluded issues. Note that the CASE ID on the Issues sheet is in column 1, so that is the column we look up and blank out. In order to copy the remaining issues back to the worksheet, we have to clear ALL the issues first, then copy the array data to the worksheet. Here's the whole routine in a single view:

Option Explicit

Public Sub RemoveExclusions()
    Dim exclusionsWS As Worksheet
    Dim issuesWS As Worksheet
    Set exclusionsWS = ThisWorkbook.Sheets("Exclusions")
    Set issuesWS = ThisWorkbook.Sheets("Issues")

    With exclusionsWS
        If (.AutoFilterMode And .FilterMode) Or .FilterMode Then
            .AutoFilter.ShowAllData
        End If
    End With
    With issuesWS
        If (.AutoFilterMode And .FilterMode) Or .FilterMode Then
            .AutoFilter.ShowAllData
        End If
    End With

    Dim lastRow As Long
    With exclusionsWS
        lastRow = .Cells(.Rows.Count, "J").End(xlUp).Row
    End With

    '--- move the exclusion data to a memory-based array
    '    for processing into a dictionary
    Dim exclusionData As Variant
    exclusionData = exclusionsWS.UsedRange

    Dim exclusion As Dictionary
    Set exclusion = New Dictionary
    Dim i As Long
    For i = 2 To lastRow
        If Not exclusionData(i, 10) = vbNullString Then
            exclusion.Add exclusionData(i, 10), i
        End If
    Next i

    '--- move all the issues into a memory-based array also
    '    and clear the CASE ID (column 1) on exclusion matches
    Dim issuesData As Variant
    Dim excludedCount As Long
    issuesData = issuesWS.UsedRange
    For i = 2 To UBound(issuesData, 1)
        If exclusion.Exists(issuesData(i, 1)) Then
            issuesData(i, 1) = vbNullString
            excludedCount = excludedCount + 1
        End If
    Next i

    '--- now collapse all the empty rows by copying the remaining
    '    issues into a new array, then copy the array back to the
    '    worksheet
    Dim remainingIssues As Variant
    ReDim remainingIssues(1 To UBound(issuesData, 1) - excludedCount, _
                          1 To UBound(issuesData, 2))
    Dim newIssue As Long
    newIssue = 1
    Dim j As Long
    For i = 1 To UBound(issuesData, 1)
        If Not issuesData(i, 1) = vbNullString Then
            For j = 1 To UBound(issuesData, 2)
                remainingIssues(newIssue, j) = issuesData(i, j)
            Next j
            newIssue = newIssue + 1
        End If
    Next i

    issuesWS.UsedRange.ClearContents
    issuesWS.Range("A1").Resize(UBound(remainingIssues, 1), _
                                UBound(remainingIssues, 2)) = remainingIssues
End Sub
{ "domain": "codereview.stackexchange", "id": 34237, "tags": "performance, beginner, vba, excel" }
Can someone intuitively explain the reason for the units of entropy (J/K)?
Question: If entropy is just a measure of the "randomness" or "chaos" of a system, then why does it have specific units? What is the reason behind the units of entropy being J/K? Answer: The units of energy over temperature (e.g. J/K) used for entropy in the thermodynamic definition follow from a historical association with heat transfer under temperature gradients; in other words, the definitions of temperature and entropy are intertwined, with entropy being the more fundamental property. Entropy can be thought of as a potential, and temperature (or rather its inverse) as a generalized force associated with displacements along energy dimensions in the entropy potential. The inverse temperature is a measure of the effect of changes in the amount of energy on the entropy of the system, as the following definition suggests: $$\frac{1}{T}=\left(\frac{\partial S}{\partial U}\right)_V$$ However, if you start from a statistical mechanical definition of the entropy, $$S=k_\mathrm{B}\log\Omega$$ then it is easy to see that a unitless definition would be just as well suited: $$S_{\mathrm{unitless}}=\log\Omega$$ However, in the absence of the Boltzmann constant you need some other way to relate entropy and energy (e.g. heat transfer) in our conventional system of units. You can of course subsume $k_\mathrm{B}$ into the definition of a new temperature scale: $$\frac{1}{T_\mathrm{new}}=\left(\frac{\partial S_\mathrm{unitless}}{\partial U}\right)_V$$ where the old and new temperature scales are related as $T_\mathrm{new}=k_\mathrm{B}T$. The new temperature scale then has units of energy.
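A tiny numerical illustration of this bookkeeping (the Boltzmann constant is the standard value; the microstate count is made up):

```python
# S = k_B * ln(Omega): the statistical part ln(Omega) is a pure number;
# the Boltzmann constant is what carries the J/K.
import math

k_B = 1.380649e-23   # J/K; exact since the 2019 SI redefinition
omega = 2 ** 100     # microstate count of, say, 100 independent two-state spins

S_unitless = math.log(omega)   # dimensionless entropy
S = k_B * S_unitless           # conventional entropy, in J/K

assert math.isclose(S_unitless, 100 * math.log(2))
assert math.isclose(S, 9.5699e-22, rel_tol=1e-3)   # J/K

# Rescaling temperature as T_new = k_B * T moves the constant into the
# temperature scale: T_new has units of energy, and S_unitless / T_new
# equals S / (k_B * k_B * T) -- the same physics in different bookkeeping.
T = 300.0                 # K
T_new = k_B * T           # J
assert math.isclose(S_unitless / T_new, S / (k_B * k_B * T))
```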
{ "domain": "chemistry.stackexchange", "id": 13360, "tags": "entropy, units" }
What is the complexity of deciding if the intersection of a regular language and a context free language is empty?
Question: Let's say we have a context-free language and the CFG that produces it, and then a DFA for a regular language. Whether the intersection is empty is decidable, but how, and how efficiently? And if you could be so kind, please explain the best known algorithm as simply as you can. Answer: The answer depends on how the two languages are given to you; I will assume that the context-free language is given as a CFG, and that the regular language is given as a DFA/NFA. Beigel and Gasarch show how to compute a CFG for the intersection in polynomial time. You can then check whether the resulting language is empty using grammar simplification (described in any decent textbook).
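The emptiness check via grammar simplification amounts to computing which nonterminals are "generating" (derive some terminal string) by a fixed-point iteration; a minimal Python sketch (the grammar encoding is my own, for illustration):

```python
# Emptiness test for a CFG: the language is nonempty iff the start symbol
# is generating, i.e. derives at least one terminal string.
def cfg_is_empty(rules, start):
    """rules: dict nonterminal -> list of right-hand sides (tuples of symbols);
    a symbol is a terminal iff it is not a key of `rules`."""
    generating = set()
    changed = True
    while changed:
        changed = False
        for nt, rhss in rules.items():
            if nt not in generating and any(
                all(sym in generating or sym not in rules for sym in rhs)
                for rhs in rhss
            ):
                generating.add(nt)
                changed = True
    return start not in generating

# S -> a S b | a b  generates strings, so the language is nonempty:
assert not cfg_is_empty({"S": [("a", "S", "b"), ("a", "b")]}, "S")
# S -> S a  can never terminate a derivation, so the language is empty:
assert cfg_is_empty({"S": [("S", "a")]}, "S")
```

Each pass marks a nonterminal once some right-hand side consists entirely of terminals and already-marked nonterminals; the loop runs at most once per nonterminal, so this is polynomial in the grammar size.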
{ "domain": "cs.stackexchange", "id": 9692, "tags": "complexity-theory, formal-languages" }
Are wormholes evidence for traversal of a higher dimension?
Question: Warning, pop science coming... please correct what I'm getting wrong. Einstein's equations of relativity showed the potential existence of wormholes that can connect different points in spacetime. I understand the mechanisms for their practical implementation are nowhere near feasible. However, based on the equations of gravitational "tunneling", I could traverse back and forth between times and locations. Wouldn't this require a higher dimension than 4D spacetime? That is, we're moving from a point that we would think of as the present to another point we would think of as the present. If this were feasible, would these "presents" need to be on a traversable continuum? To my lay brain, this seems as though there are points along a higher dimension where what we would consider the future is currently present, and what we consider the past is also present; that the world we see is determined and laid out as slices in a higher dimension that would be traversed with a wormhole, and that we normally traverse in a single direction. Answer: Wormholes in GR do not require higher dimensions. It is easier to imagine curved spacetime as being embedded in higher dimensions, but the usual mathematical description of curved spaces does not require that.
{ "domain": "physics.stackexchange", "id": 77196, "tags": "general-relativity, causality, determinism, wormholes, time-travel" }
Help me understand Lepton universality
Question: I understand that we have three pairs (generations) of leptons: $(\nu_e , e^-), (\nu_{\mu}, \mu^-), (\nu_{\tau}, \tau^-)$. But what is this principle of lepton universality? I have obviously tried googling it before asking here, but all of the sites tend to just run over it quickly, and I am not quite sure I understand exactly what it covers. So can someone please help me understand it, or maybe give me a good source? Answer: When the energy in a given reaction is much larger than the masses of all leptons, these masses can be neglected. For example, if you consider $W$ boson decays, then $$M_W \gg m_\tau.$$ The tau is the heaviest among the charged leptons, so if you can neglect the mass of the tau, you can also neglect the masses of the other leptons. But now you have neglected the only quantity that distinguishes between leptons. Apart from mass, all other characteristics (spin, charge, etc.) are the same for $\tau$, $\mu$ and $e$. Since there is nothing that differentiates between leptons in the massless limit, all the decay rates and cross-sections have to be equal. For example, $$\Gamma(W^+\rightarrow e^+ \nu_e )=\Gamma(W^+\rightarrow \mu^+ \nu_\mu ).$$ And this is what we call lepton universality.
{ "domain": "physics.stackexchange", "id": 64048, "tags": "standard-model, leptons" }
Why do electrons follow the conductor's shape?
Question: I'm stuck thinking about this situation. I imagine that there are two oppositely charged objects at a short distance $r$, placed inside an insulator (can I say air?). They generate a net electric field, but since they're apart, no electrons flow. Then I connect them with a bizarre conductive wire and electrons start flowing until balance is reached. I wonder: how and why does the electron flow follow the shape of the wire? Is it because of the net electric field? Can we say that they prefer to follow a strange path through a conductor rather than a straighter, shorter one through an insulator? I tried to make a picture of the situation I describe. Since for my understanding it's easier to picture electrons flowing, I drew a reversed electric field so that charge moves from low to high potential (hope I can do that!). I hope it doesn't seem too awkward a question. If there is any problem with this question, please make a comment and I will try to edit it. If there's anything you want to address, don't hesitate. Answer: Not an awkward question at all. I've had the same doubt, and such details are important. The thing is that electrons by all means won't cross an insulator. That takes immense energy or voltage (it happens in lightning strikes). When an electron reaches an edge of the conductor, it is therefore stopped. Then the next electron arrives. They repel each other and will move anywhere there is room. (They can't move forward, because the insulator prevents motion more than the repulsion pushes, and they can't move backwards, from where more electrons arrive.) So they move sideways along the curved wire. Every time the flow reaches a corner or edge, this happens. This guides the electrons along the conducting path, no matter the shape. If you cut the wire, so that the electrons reach an end with nowhere to go, they have no choice but to stop and push back against the incoming electrons. Soon several electrons gather at this dead end, and the accumulated electric field becomes large enough to counteract the voltage that makes them move in the first place. The forces balance and everything stops moving. This is an open circuit.
{ "domain": "physics.stackexchange", "id": 39745, "tags": "electricity, electric-circuits, electrons, electric-current, conductors" }
Controlling the outcome of a quantum measurement through translational entanglement
Question: According to the paper: A. S. Parkins and H. J. Kimble, Phys. Rev. A 61, 52104 (2000). http://pra.aps.org/abstract/PRA/v61/i5/e052104 You can entangle the positions and momenta of two atoms by using entangled light. Atoms A and B are located at sites A and B. You get entangled light by parametric down-conversion and send one photon to atom A and the corresponding entangled photon to atom B. You keep on sending entangled pairs until atoms A and B come to a steady state, and they will be entangled with each other. Essentially what is happening is that "entanglement is transferred from a pair of quantum-correlated light fields to a pair of trapped atoms in a process of quantum state exchange." The atom is trapped in the x direction, stuck in a harmonic potential, and the entanglement is only in the x direction. Positions are anti-correlated between the two atoms ($q_1 = -q_2$) and momenta are correlated ($p_1 = p_2$). You can measure the position and momentum of each particle using homodyne measurement techniques. So, if I were to measure the position of atom A very, very precisely, then its momentum would be very uncertain. Its distribution would be so smeared out that it would approach a uniform distribution of momentum, if the position is a sharp delta function. At this point, there would exist a probability that the momentum is very large, and thus its kinetic energy would be very large too: KE = p^2/2m. (This needs checking.) Would there be a non-negligible probability that the energy of the atom would be so large it would escape the harmonic oscillator trap?! Since the two atoms are entangled, $p_1 = p_2$, so atom B would also have correlated momentum, which translates to correlated kinetic energy. Atom B would have enough energy to escape the harmonic trap as well. If everything I have stated is correct, haven't I found a method to communicate via translational entanglement, because I can control the outcome of the measurement? (Unlike spin...)
Let's say we have an ensemble of these entangled atoms. A scientist at Lab A can send a binary message by choosing to measure the position of an atom at Lab A to high precision, thereby causing the atom at Lab B to escape the harmonic trap; not measuring would keep the atoms trapped. The presence or absence would correspond to a 1 or a 0. Since there would be some chance that the momentum is not large enough to cause an escape, we could have batches of entangled atoms represent one bit. Very costly, but you can interact instantaneously over large distances... Answer: It was proved a long time ago that quantum entanglement cannot be used to transmit information faster than light by any means. I'm sure this is no exception (you "control" the next outcome of the measurement in spin entanglement too), but I'm not sure how to prove it. Here is my guess: Once the atoms are entangled, you can't touch them without breaking the entanglement. You have the two atoms in the same state, with little knowledge about position and momentum; then you separate them and they remain entangled. Should you measure one atom's position very precisely, you break the entanglement, and you know atom B's position very precisely as well, but this knowledge is not accompanied by momentum uncertainty. I seem to remember from my lessons on quantum metrology that since you are using two atoms to extract your measurements, you can bypass the Heisenberg uncertainty principle, knowing both position and momentum of the atom (before measurement, of course) with great precision. You can do the same thing with lasers, measuring the number of photons and the phase (which are conjugate variables). Don't you have an (ex-)advisor to ask about this? He or she could definitely help you out. Otherwise you can send an email to Fabio Sciarrino (you can google him); he's a very friendly researcher in experimental quantum information with whom I had the pleasure of studying for some time.
He definitely knows this.
{ "domain": "physics.stackexchange", "id": 5131, "tags": "momentum, quantum-entanglement, measurements, measurement-problem, faster-than-light" }
ROS on Ubuntu 21.04
Question: Hi! Has anybody installed ROS on Ubuntu 21.04 Hirsute? Could you please guide me through the process of installing ROS on Ubuntu 21.04? Thank you in advance Originally posted by Amigo on ROS Answers with karma: 13 on 2021-06-14 Post score: 1 Answer: Which ROS version/distribution do you want to use? As far as I know, no ROS version supports Ubuntu 21.04 yet. See the REPs for more details: For ROS1: https://www.ros.org/reps/rep-0003.html For ROS2: https://www.ros.org/reps/rep-2000.html You can also check the distribution pages: For ROS1: http://wiki.ros.org/Distributions For ROS2: https://docs.ros.org/en/galactic/Releases.html Originally posted by Youssef_Lah with karma: 195 on 2021-06-15 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by Amigo on 2021-06-15: Hi @Youssef_lah, thanks for your answer. I've installed a fresh copy of Ubuntu 21.04 on my desktop computer just to work with ROS. I wonder when there will be a release of ROS for this version of Ubuntu. I want to get into learning more about ROS; unfortunately it is not possible at this point in time :( Comment by Youssef_Lah on 2021-06-15: You can simply delete Ubuntu 21.04 and install 20.04, then install ROS1 Noetic or ROS2 (Foxy or Galactic), then start learning/practicing; you don't have to wait for a new version for 21.04. Comment by Amigo on 2021-06-16: Thank you @Youssef_Lah for the advice. I'll install Ubuntu 20.04 on my machine. I'm also new to robotics. Could you please suggest a good book on robotics? I'm a software engineer who is also interested in robotics. Currently I'm taking Stanford's (Oussama Khatib's) "Introduction to Robotics". I would be grateful if you could suggest a book :) Comment by Youssef_Lah on 2021-06-16: I started learning ROS by reading/practicing this book: http://wiki.ros.org/Books/ROS_Robot_Programming_English Comment by Amigo on 2021-06-16: Thank you sir. Take care & have a nice day! :)
{ "domain": "robotics.stackexchange", "id": 36523, "tags": "ros2, rosinstall" }
LPTHW - ex48 - Handling exceptions and unit testing
Question: This is my solution to exercise 48 of Learn Python the Hard Way by Zed Shaw. Please visit the link for the testing suite and requirements. I'm worried about the word banks I have created (COMPASS, VERBS, etc.), as this seems to duplicate the testing code. I'm also afraid my exception handling might have gone overboard, as all I needed it for was finding out which string items could be converted to integers.

COMPASS = ['north', 'south', 'east', 'west']
VERBS = ['go', 'kill', 'eat']
STOPS = ['the', 'in', 'of']
NOUNS = ['bear', 'princess']

import string

def scan(sentence):
    action = []
    # TODO: it seems there might be a way to combine these if statements
    for word in sentence.split(' '):
        lword = string.lower(word)
        try:
            if lword in COMPASS:
                action.append(('direction', word))
            elif lword in VERBS:
                action.append(('verb', word))
            elif lword in STOPS:
                action.append(('stop', word))
            elif lword in NOUNS:
                action.append(('noun', word))
            elif int(lword):
                action.append(('number', int(word)))
        except:
            action.append(('error', word))
    return action

Answer: lword = string.lower(word) — there is no need to import the string module for this; you can use: lword = word.lower() As for this: elif int(lword): — probably better as: elif lword.isdigit(): As for your exception handling: except: action.append(('error', word)) You should almost never use a bare except. You should catch a specific exception, and put a minimal amount of code in the try block. In this case you could have done something like:

try:
    action.append(('number', int(word)))
except ValueError:
    action.append(('error', word))
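Pulling the reviewer's suggestions together, the whole function might look like this in modern Python 3 — a sketch only, not run against the book's test suite; `word.lower()` replaces the removed `string.lower`, and the `try` block now wraps only the `int()` conversion (note it treats "0" as a number, which the original's truthiness check did not):

```python
COMPASS = ['north', 'south', 'east', 'west']
VERBS = ['go', 'kill', 'eat']
STOPS = ['the', 'in', 'of']
NOUNS = ['bear', 'princess']

def scan(sentence):
    action = []
    for word in sentence.split():
        lword = word.lower()
        if lword in COMPASS:
            action.append(('direction', word))
        elif lword in VERBS:
            action.append(('verb', word))
        elif lword in STOPS:
            action.append(('stop', word))
        elif lword in NOUNS:
            action.append(('noun', word))
        else:
            # only the int() conversion can raise, so only it is guarded
            try:
                action.append(('number', int(word)))
            except ValueError:
                action.append(('error', word))
    return action
```

Using `split()` instead of `split(' ')` also tolerates runs of whitespace, which may or may not matter for the book's tests.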
{ "domain": "codereview.stackexchange", "id": 1723, "tags": "python, unit-testing, exception-handling" }
General coordinates and orthogonality
Question: To my understanding, generalized coordinates may be chosen so that the basis vectors have components in other basis vectors. They may even be parallel and can basically look however one would like them to look. Why is it then that: $$\frac{\partial x^{\mu}}{ \partial x^{\nu}}~=~\delta^{\mu}_{\nu}\ ?$$ This would seem to be false if there was some component of $x^{\mu}$ in $x^{\nu}$. Answer: The notation $$ \frac{\partial}{\partial x^\nu}f(x) $$ means "derivative with respect to $x^\nu$ while keeping the rest of coordinates fixed". Therefore, $$ \frac{\partial x^{\mu}}{ \partial x^{\nu}}=\delta^{\mu}_{\nu} $$ holds by definition.
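The answer's point — differentiating one coordinate while holding the rest fixed — can be checked numerically. A small sketch (the coordinate labels, sample point, and helper names are arbitrary choices for illustration):

```python
# Check that d(x^mu)/d(x^nu) = delta^mu_nu numerically: the coordinate
# function x^mu reads off only the mu-th slot, whatever the coordinates mean.
def partial(f, i, point, h=1e-6):
    """Central-difference partial derivative of f at point, w.r.t. slot i."""
    up = list(point); up[i] += h
    dn = list(point); dn[i] -= h
    return (f(up) - f(dn)) / (2 * h)

coord = lambda mu: (lambda p: p[mu])   # the function that returns x^mu
point = [1.7, -0.3]                     # an arbitrary point
delta = [[partial(coord(mu), nu, point) for nu in range(2)] for mu in range(2)]
# delta is (numerically) [[1, 0], [0, 1]], i.e. the Kronecker delta
```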
{ "domain": "physics.stackexchange", "id": 38009, "tags": "differential-geometry, coordinate-systems, differentiation" }
Average and lowest degrees of kinship/consanguinity among humans?
Question: I would appreciate insight into the average, median, RMS, or any similar measure of relatedness among the current world population - and perhaps something about how rapidly this may be changing. A similar question is how un-related any two humans can be: i.e., what is the lowest degree of consanguinity between the two most distantly related people. The context is an exploration of how humans have evolved tendencies toward racism and other in/outgroup distinctions, when all humans share such a large fraction of DNA with each other, very nearly as much with non-human primates, and about half even with fruit flies. Might be some helpful lessons in there! Apologies if the question is ill-formed, or the answer readily available someplace - I've browsed the Web for several years on this topic and found nothing I could understand. Accessible material about the most recent common ancestor and the identical ancestors point seems to indicate a wide range of both methodologies and results, based mainly on statistical simulations, since "hard" genetic structures apparently do not persist. MRCA datings based on mitochondrial and other genetics seem to line up with human behavioral modernity, ca. 200 kya. But there seem to be extreme estimates as recent as 2.3 kya, which implies a lot of mobility and high fertility by some not-so-distant forebears (like Genghis Khan). In any case, IAP may be a better starting point for this shared-ancestry question. I'm guessing that no human is further than about a 10th~12th cousin to any other. It can't be more than 32nd, since 2^33 is more than the number of living humans! All thoughts, including guesses more informed than mine, will be appreciated. ~ ~ ~ ~ ~ Addendum - tried to post this as an answer, but it was deleted: MANY thanks to Zo-Bro-23 for the time, effort and creativity to create his response. I hope it is well-indexed for future explorers to find!
In case useful or interesting to anyone,or sparks further contributions, here were/are my main motivations for this inquiry: Though having working knowledge of physical and natural sciences, and having minored in anthropology at Uni, I have never really grasped current estimations of "most recent common ancestor" and "identical ancestors point." This is probably due to my poor understanding of populations statistics, but may also reflect the huge uncertainty in and conflict between various MRCA and IAP methodologies. So I'm always seeking a simpler, if less precise, way to understand these propositions. Time-dependent mean/max degree of consanguinity seems like such a heuristic. The ancestor cone, as mentioned in the Quartz article linked by Zo-Bro-23: I've been using the same term for decades, so I suppose it's a natural way to describe something that isn't precisely conic or biconic. For me, the description follows naturally from the notion of the light (bi-)cone in Minkowski diagrams, extending the idea of a photon's world-line to that of a genomic cluster. I've long wondered about both genetic and memetic (e.g. religion, language, cuisine, art, toolmaking) contingency as they result from and induce changes in the topology of the ancestor cone. There are numerous groups that for geographic, religious or other reasons have been largely independent since we all left the Great Rift Valley (or wherever humanity arose). Having studied a bit of cultural diffusion, I've wondered if genetics, linguistics or anything else might give insight into how often and effectively the occasional Marco Polo or Squanto voluntarily or involuntarily acts as emissary between cultures. Many of these exchanges have apparently had significant cultural influence, and in some cases perhaps genetic as well. It might take very few such, over hundreds or thousands of generations, to lower substantially the degree of consanguinity of all humans. 
This does not even count the much larger migrations, both voluntary (e.g. Bering Strait crossings that peopled the Americas), semi-voluntary (the Irish famine that so strongly influenced US culture) or entirely unwillingly (the African slave trade). Another question concerns the influence of sex-linked characteristics in Africans on the general American population, where for several centuries, the great majority of interbreeding appears to have been between White males and (usually unwilling, one imagines) Black females. Perhaps too hot a topic for research? Pedigree collapse is a conflating factor of special interest to me, as most of my known ancestors belonged to a relatively small ethnic group that favored in-marriage, and (due to low social standing) was not attractive for outsiders to join. So other than couplings of necessity (rape, isolated populations), our cone is probably pretty narrow down to the bottom of the historic timeline. Not directly related by mechanism but probably relevant is the Galton–Watson process, the extinction of family names when a named lineage runs out of (usually male now; perhaps less so in "primitive" matriarchal societies) descendants. Not long ago, it was popularly believed that explosive population growth since the industrial revolution resulted in the number of living humans exceeding the number deceased. Current credible estimates seem to place the latter at slightly more than 100 bn, - a dozen ghosts standing behind each living soul. Theme for an SF movie? There is also the question of speciation. It is widely accepted that Homo sapiens and Homo neanderthalensis interbred to a considerable degree. This would, of course, violate pre-cladistic notions of what constitutes a species. (The cited Quartz article could have been titled "Everyone on Earth is actually your cousin - including some non-humans." And if all humans are at least 15th cousins, how far are we from other living families, even phyla?) 
I believe that barring "lost world" scenarios, we have been alone on the planet for all of historic time, hence for the lives of most H. sapiens. But if we date humanity to behavioral modernity, and plot the ancestor cone with a logarithmic or horizontal axis (or even force it into a cylinder of constant width), the influence of some of these factors on present-day consanguinity might be seen as comparable to those of posited bottlenecks like the Toba catastrophe, climate change, pandemics etc.; and on a smaller scale, founder effects like the peopling of the Pacific islands and Australia. Answer: TL;DR: Most humans would be around your 13th to 15th cousin. Please feel free to edit the formatting of my equations to make them look better. Deriving an equation: For simplicity's sake, we are assuming that only "blood relations" count. What I mean by this is, your mother's sister's son will count as your cousin, but your father's sister-in-law's daughter will not count as your cousin. Marriage relations will not be counted, only birth relations. Let's tackle this step by step. We are trying to find the degree of kinship at which the number of relatives you have in that degree exceeds 8 billion. However, let's start by simply deriving an equation to find the number of cousins you have at a certain degree. For example, your siblings would be degree 0, first cousins would be degree 1, and so on. Let's denote this by the variable n. Now, given a person in any degree of kinship with you, you share an ancestor with them. That is the definition of kinship. Given a person in degree n with you, your closest common ancestor will be n + 1 generations from you. That means that with your siblings (degree 0), your closest common ancestor is 1 generation from you (your parents). That is simple. Now, we want to find out how many ancestors you have in that generation. Again, this is fairly simple. That would be $2^{n+1}$.
This can be understood by the fact that you have 2 parents, each of them has 2 parents, each of them has 2 parents, etc.: the number of ancestors is multiplied by 2 each generation. The number of ancestral couples you have in that generation would then be $2^{n+1}/2$, or simply $2^n$. Now, let's assume that everyone has x children. This means that each of those $2^n$ couples would have x children, so the number of descendants in the next generation down would be $2^n x$. Each of those children will have x children, so the number in the generation after that would be $2^n x \cdot x$, or $2^n x^2$. Hence, the formula for the number of people in your generation with whom you share this common ancestor would be $2^n x^{n+1}$ (since the original ancestors are n + 1 generations above you). However, we are not done yet. Of these children, some are duplicates. What I mean is: you have 4 grandparents. We have assumed that each couple has x children and that each of those children has x children, so the count comes to $x^2$ children per grandparent couple. However, you count as both your father's child and your mother's child, which means there will actually be fewer distinct children than we counted. We only care about you, though, so we can just subtract the places where you are duplicated. This is essentially $x^n$ duplicates: each of your ancestral generations (except the top one) has only x children, rather than the 2x we get when we count the two parents separately. So, the final equation is as follows: $c = 2^n x^{n+1} - (x^n + 1)$, where c is the number of cousins you have up to a given degree, x is the average number of children per couple, and n is the degree.
A couple of notes on the equation: We are subtracting 1 to account for you - you are not your own cousin. This will not work for siblings - you will get one less than the actual number. Testing the equation: If everyone has 3 children, you would have 14 first cousins. Your mother will have 2 siblings, and each of them will have three children. This is a total of 6 cousins from your mother's side. You will also have 6 cousins from your father's side. Added to your 2 siblings, that is a total of 14 (6 + 6 + 2). From the equation, with x = 3 (everyone has 3 children) and n = 1 (first cousins), we get: $c = 2^n x^{n+1} - (x^n + 1) = 2^1 \cdot 3^2 - (3^1 + 1) = 18 - 4 = 14$. This means that we have 14 first cousins (if everyone has three children). Since this is what we arrived at logically, the equation must be correct. Although this is not a formal proof, I did go over this in my head for many hours, and it seems to work fine. Please comment if you have any revisions for the equation, and I am more than happy to make an edit. Some caveats: Until now, we have assumed that everyone has exactly x children. However, this is not the case. Since we only need an approximation for our final answer, we will be using the average birth rate. This means that most people will have about x children; some might have none, some might have many. Using this value should give us a fairly good approximation. Also, according to Rutgers anthropology professor Robin Fox, 80% of all marriages in history have been between second cousins or closer. This means that you will have far fewer cousins than predicted by the equation, since even though we are assuming everyone has x children, two people together are having those x children, and so the total number of children in a generation will be much lower than predicted.
Since inbreeding is so common, there's also a chance that you could be much further from someone than predicted by the equation. Nevertheless, let's try the calculation. Before that, we need to think a bit more about population genetics. In India (where I am from), there is mostly intermarriage (Indians marrying Indians). Although this is starting to change now, it was more or less like that for a long time. Because of this, there is a very low chance that I will be related to an American. We might have a common ancestor, but we might have to go as far back as the first Homo sapiens for that. For the same reason, I might be much more closely related to a random Indian than to a random human being. I have done some calculations for you below, but try experimenting on your own. Rearranging the equation to find the degree of kinship: We will rearrange this equation to solve for n (the degree of kinship): $c = 2^n x^{n+1} - (x^n + 1)$, so $c = 2^n x^n \cdot x - (x^n + 1)$, hence $c \approx x^n (2^n x - 1)$ - we are ignoring the subtracted 1 because it is insignificant when $x^n$ is large enough - and $c \approx (2x)^n x$ - we are ignoring the remaining 1 again because $2^n x$ is going to be very large for these problems. Taking logarithms: $\log c \approx n \log 2x + \log x$, so $\log c - \log x \approx n \log 2x$, giving n ≈ $\frac{\log c - \log x}{\log 2x}$ Since we are making some approximations in this rearrangement, we will need to do some trial-and-error work using the original equation in order to ensure that the answer is correct. Calculation 1 - How many degrees is an Indian away from me?
Value of x: average birth rate in India: 2.20 births per woman. Value of c: 1.38 billion. n ≈ $\frac{\log c - \log x}{\log 2x}$ = $\frac{\log (1.38 \times 10^9) - \log 2.20}{\log 4.40}$ = $\frac{9.1399 - 0.3424}{0.6435}$ ≈ 13.67 Verifying our answer: Value of n = 13: $c = 2^{13} \cdot 2.20^{14} - (2.20^{13} + 1) = 8192 \times 62218.2 - 28282 \approx 5.10 \times 10^8$ ≈ 510 million. Value of n = 14: $c = 2^{14} \cdot 2.20^{15} - (2.20^{14} + 1) = 16384 \times 136880.1 - 62219 \approx 2.24 \times 10^9$ ≈ 2.2 billion. This means that I have roughly 510 million 13th cousins and roughly 2.2 billion 14th cousins. Thus, the furthest an Indian can be is my 14th cousin. Calculation 2 - How many degrees is another human being from me? Value of x: average birth rate across the world: 2.3 births per woman. Value of c: 7.9 billion. n ≈ $\frac{\log (7.9 \times 10^9) - \log 2.3}{\log 4.6}$ = $\frac{9.8976 - 0.3617}{0.6628}$ ≈ 14.39 Verifying our answer: Value of n = 14: $c = 2^{14} \cdot 2.3^{15} - (2.3^{14} + 1) = 16384 \times 266635 - 115929 \approx 4.4 \times 10^9$ ≈ 4.4 billion. Value of n = 15: $c = 2^{15} \cdot 2.3^{16} - (2.3^{15} + 1) = 32768 \times 613261 - 266636 \approx 2.0 \times 10^{10}$ ≈ 20 billion. This means that I have roughly 4.4 billion 14th cousins and roughly 20 billion 15th cousins. Thus, the furthest that any human can be is my 15th cousin. Evaluation of results: Although these results show that we are all very close (depending on how you evaluate them) in terms of kinship, there is another factor to consider. Through a phenomenon known as pedigree collapse, it is estimated that we share many common ancestors.
That is, our great-great-grandfather on our mother's side might be the same person as our great-great-grandfather on our father's side. This means that most people will be further from you than calculated by this equation. For example, if two cousins had a child, that child would only have six great-grandparents, not eight. Thus, that child will have fewer cousins than somebody else. We have also not considered marital relationships, only birth relations. Hence, the actual answer to your question might vary significantly from the values we calculated. To learn more about this field, I would recommend checking out this article. It is very well-written in my opinion.
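The answer's formula and its trial-and-error search for n can be expressed in a few lines of Python — a sketch of the calculation above (`degree_needed` is a hypothetical helper name, not from the answer):

```python
# The answer's formula: c = 2^n * x^(n+1) - (x^n + 1), where x is the
# average number of children per couple and n is the degree of cousinhood.
def cousins(n, x):
    """Approximate number of relatives of degree n or closer (excluding you)."""
    return 2**n * x**(n + 1) - (x**n + 1)

def degree_needed(population, x):
    """Smallest degree n at which the cousin count covers the population."""
    n = 1
    while cousins(n, x) < population:
        n += 1
    return n

print(cousins(1, 3))              # 14, matching the worked example
print(degree_needed(7.9e9, 2.3))  # 15, matching "at most a 15th cousin"
```

With India's figures, `degree_needed(1.38e9, 2.20)` gives 14, matching Calculation 1.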
{ "domain": "biology.stackexchange", "id": 11907, "tags": "population-genetics" }
Where to start on neural networks
Question: First of all, I know the question may not be suitable for this website, but I'd really appreciate it if you just gave me some pointers. I'm a 16-year-old programmer, and I've had experience with many different programming languages. A while ago I started a course at Coursera titled Introduction to Machine Learning, and since that moment I got very motivated to learn about AI. I started reading about neural networks, and I made a working perceptron using Java, which was really fun, but when I started to do something a little more challenging (building digit recognition software), I found out that I have to learn a lot of math. I love math, but the schools here don't teach us much. Now, I happen to know someone who is a math teacher. Do you think learning math (specifically calculus) is necessary for me to learn AI, or should I wait until I learn it at school? Also, what other things would be helpful on my path to learning AI and machine learning? Do other techniques (like SVMs) also require strong math? Sorry if my question is long; I'd really appreciate it if you could share any experience you have had with learning AI. Answer: No, you should go ahead and learn the maths on your own. You will "only" need to learn calculus, statistics, and linear algebra (like the rest of machine learning). The theory of neural networks is pretty primitive at this point - it is more of an art than a science - so I think you can understand it if you try. That said, there are a lot of tricks that you need practical experience to learn. There are a lot of complicated extensions, but you can worry about them once you get that far. Once you can understand the Coursera classes on ML and neural networks (Hinton's), I suggest getting some practice. You might like this introduction.
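To show how little machinery the single perceptron the asker mentions actually needs, here is a minimal sketch in plain Python, trained on the AND function (the learning rate and epoch count are arbitrary choices; only arithmetic is involved — the calculus shows up when you move to multi-layer nets and backpropagation):

```python
# A minimal perceptron learning the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    # step activation on the weighted sum
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                 # a few passes over the data suffice here
    for x, target in data:
        err = target - predict(x)   # -1, 0, or +1
        w[0] += lr * err * x[0]     # the classic perceptron update rule
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])   # [0, 0, 0, 1]
```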
{ "domain": "datascience.stackexchange", "id": 6420, "tags": "machine-learning, neural-network, svm" }
Is there any other way to describe work done instead of saying the force applied to move an object through a distance?
Question: I was reading about energy, and I learned that energy is the ability to do work. Then I read about work, and I found that almost everywhere, resources say that when an object applies a force to move another object through a distance, it is said to do work, and the math is $W = Fs$ (even the tag on this site for "work" says this). So my first question is: can work be described in another way instead of saying this? It is not making complete intuitive sense to me. And why is it defined like this? What is the reason behind defining something called work? Is there a derivation of it? I am also not able to understand negative work. Let's recount the example of friction, which opposes the motion of an object, so we say that the work done is negative. But if I go by the definition, then I think that friction is not making the object move through a distance; it is trying to stop it. So what role does work play here? I have also seen many physics problems that looked extremely difficult to solve, but with this concept of conservation of energy they looked very simple. But still, I am not able to understand the reason behind the formulation. Is this formulation more of an empirical result, or is there something else that I am missing? I am not opposing this, just asking about the logic behind things. I hope readers are not annoyed by this :) Answer: To back up CuriousOne, a good example would be you sumo wrestling with your friend. Let's say your friend is far bigger/stronger than you. So you begin pushing one another. His force on you is greater, and so you are obviously moving in what you would call your -x direction. Both of you move in the -x direction, but of course you're exerting a force on him too, in your +x direction. Since both of you are moving in the -x direction, from your point of view (your x-axis) his greater work is negative (opposing where you want to go), while your work on him is positive.
If you pushed with less and less force, he would travel further, so you were impeding him (doing negative work with respect to his x-axis). If you pushed each other with the same force, you'd both be still, so no work would be done.
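The sign behaviour discussed above follows from the more general definition $W = Fd\cos\theta$, where $\theta$ is the angle between the force and the displacement; a small sketch with arbitrary numbers:

```python
import math

# W = F * d * cos(theta): the sign of the work comes from the angle
# between the force and the displacement.
def work(force, distance, angle_deg):
    return force * distance * math.cos(math.radians(angle_deg))

print(work(5.0, 2.0, 0))     # 10.0 -- force along the motion: positive work
print(work(5.0, 2.0, 180))   # -10.0 -- friction opposing the motion: negative work
print(work(5.0, 2.0, 90))    # ~0 (floating-point noise) -- perpendicular force: no work
```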
{ "domain": "physics.stackexchange", "id": 27873, "tags": "work" }
Follow up to Pay Rate Calculator
Question: This is a follow-up to Simple pay rate calculator. Here is what I came up with. Hopefully, if I have missed something that was said in the answers to the previous question, those things can still be pointed out. I hope I have made the code cleaner.

Main method:

class MainClass
{
    public static void Main(string[] args)
    {
        //var payCheck = new PayRateCalculator("Hart", 40, 23);
        var payCheck = new PayCalculator();
        Console.Write("Please provide the Employee's Last Name >>");
        payCheck.lastName = Console.ReadLine();
        Console.Write("How many hours has employee {0} worked? >>", payCheck.lastName);
        payCheck.hours = Convert.ToInt32(Console.ReadLine());
        Console.Write("What is employee {0}'s hourly wage? >>", payCheck.lastName);
        payCheck.payrate = Convert.ToInt32(Console.ReadLine());
        Console.WriteLine("{0}'s Gross pay is ${1}", payCheck.lastName, payCheck.GrossTotal());
        Console.WriteLine("{0}'s Net pay is ${1}", payCheck.lastName, payCheck.NetTotal());
    }
}

PayCalculator class:

class PayCalculator
{
    private const decimal _PayRateDefault = 8.25m;
    private decimal _payrate = _PayRateDefault;
    private decimal _withholdingRate = 0.20m;
    private decimal _hours;

    public decimal hours
    {
        get { return _hours < 1 ? 40 : _hours; }
        set { _hours = value; }
    }

    public decimal grossPay { get; set; }
    public decimal netPay { get; private set; }

    public decimal payrate
    {
        get { return _payrate; }
        set
        {
            if (value < 8.25)
            {
                _payrate = 8.25m;
            }
            else
            {
                _payrate = value;
            }
        }
    }

    public decimal WithholdingRate { get; set; }
    public string lastName { get; set; }

    public decimal GrossTotal()
    {
        grossPay = hours * payrate;
        return grossPay;
    }

    public decimal NetTotal()
    {
        grossPay = GrossTotal();
        netPay = grossPay - (grossPay * _withholdingRate);
        return netPay;
    }

    public PayCalculator(string lastName, decimal hours, decimal payRate)
    {
        this.hours = hours;
        this.payrate = payRate;
        this.lastName = lastName;
    }

    public PayCalculator() { }
}

I am going to put the beginner tag on this again
because I know that my coding skills are still not very good. This will be the last review for this little bit of code. I am going to see what else catches my eye in this book while I am working through it. Answer: There are several things to address here: Naming conventions: only fields, method parameters, and local variables should be camelCase; everything else should be PascalCase. This doesn't make your programs run better, but it does make you a more likable programmer in the workplace. Being able to produce code that is consistent with your co-workers' will help you stay employed. No one likes a rogue programmer. Validate all external input: in your application the external input is the information provided by the user of your program. It could be a text box in a Windows application or data provided to a web service that you have written. In all cases you need to validate the input before processing, so that you can provide the user with an appropriate message. In your case there are places where valid values cannot be less than zero. Check for that and alert the user of their mistake. Here is an example that verifies the pay rate and ensures that it is at least the minimum pay rate:

decimal GetRate(string lastName)
{
    decimal rate = -1m;
    string tmp = string.Empty;
    do
    {
        Console.Write("What is employee {0}'s hourly wage? >>", lastName);
        tmp = Console.ReadLine();
        if (!decimal.TryParse(tmp, out rate) || rate < Paycheck.MinimumHourlyRate)
        {
            Console.WriteLine("hourly wages must be a valid number greater than {0:#######.00}", Paycheck.MinimumHourlyRate);
            // set the rate to an out-of-bounds value because the TryParse call may have returned false
            rate = -1m;
        }
    } while (rate < Paycheck.MinimumHourlyRate);
    return rate;
}

Do not let objects be created in an invalid state: this is not always possible to achieve, but you have total control over this code. Your PayCalculator class has computed values for both gross and net pay.
Neither of these values makes sense without an appropriate pay rate and number of hours. Therefore all constructors should take, at a minimum, these two values. This ensures that your PayCalculator does not get created in an invalid state. Do not let objects become invalid: you should be checking that values are in range when they are assigned. This is different from the validation mentioned earlier. For example, negative hours make no sense in this program. You should throw an appropriate exception if someone tries to set the number of hours worked to a negative value:

decimal _numberOfHours;
public decimal NumberOfHours
{
    get { return _numberOfHours; }
    set
    {
        if (value < 0)
            throw new ArgumentOutOfRangeException("NumberOfHours", value, "value cannot be less than zero");
        _numberOfHours = value;
    }
}

If you perform proper validation as discussed earlier, this exception should never be thrown. However, the whole point of exceptions is for exceptional situations that are not supposed to occur. Principle of least astonishment: you have places in your code that assume a default value when the supplied value is out of range. For example, in the property setter of payrate:

if (value < 8.25)
    _payrate = 8.25m;

A user of your code who tries to assign 1.0m to the payrate property is not expecting that value to magically change to 8.25m. If you refer back to the example for the NumberOfHours property, you will see that I am not defaulting a negative value to zero; I am throwing an exception. It is the user's responsibility to provide your PayCalculator class with valid values. In order to facilitate this you should be documenting your code. The NumberOfHours example property should clearly document that it will throw an exception if someone tries to assign a negative value to that property. Conclusion: The items explained above should be carried beyond this program and taken into consideration for all programs that you write. They will make you a better programmer in the long run if you adopt them now.
There are also a few other things in your code that are incorrect. Those are simply bugs in your code: WithholdingRate does not use its backing field. You are using Convert.ToInt32 for a decimal value; using Convert.ToInt32 or the correct Convert.ToDecimal function can throw exceptions if non-numeric text is entered. You should code for that. I recommend using decimal.TryParse instead. The payrate property setter checks value < 8.25; 8.25 is going to be a double, and double values can lose precision. It won't lose precision in the case of 8.25, but you should be writing 8.25m so that this problem does not arise
{ "domain": "codereview.stackexchange", "id": 9424, "tags": "c#, beginner" }
If earth were the size of a marble would it be smoother than a marble?
Question: As the title states, If earth were the size of a marble would it be smoother than a marble? Answer: No. This What-If page describes the case for "smoother than a bowling ball": These scans (along with various measurements of ball roughness) tell us that a high-end bowling ball is quite smooth. If blown up to the scale of the Earth, the ridges and bumps would be between 10 and 200 meters high, and the peaks would be between one and three kilometers apart: By Earth standards, this is quite smooth; our highest mountains are 40 times higher. And it is simple to continue the scaling process to show that the same holds true for marbles. (Although I have not found a good source of data from people who scan marbles...)
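The scaling argument can be checked with a little arithmetic. As a sketch, the marble size below is an assumed value (a common 16 mm marble), not something from the answer:

```python
# Shrink Earth to marble size and see how tall Everest becomes.
EARTH_RADIUS_MM = 6371.0 * 1e6   # mean radius of Earth, in millimetres
EVEREST_MM = 8.8 * 1e6           # height of Everest, in millimetres
MARBLE_RADIUS_MM = 8.0           # assumed: a common 16 mm marble

scale = MARBLE_RADIUS_MM / EARTH_RADIUS_MM
scaled_everest_mm = EVEREST_MM * scale

print(f"Everest on a marble-sized Earth: {scaled_everest_mm * 1000:.1f} micrometres")

# The bowling-ball ridges quoted above (up to 200 m at Earth scale)
# shrink to a fraction of a micrometre at the same scale.
scaled_ridge_um = 200.0 * 1e3 * scale * 1000   # a 200 m ridge, in micrometres
```

At marble scale Everest comes out around 11 µm tall, tens of times larger than the scaled-down ball ridges, consistent with the answer's "No".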
{ "domain": "astronomy.stackexchange", "id": 1036, "tags": "earth, size" }
Find the commutator $[AX+BY,Z]$
Question: The problem I'm asked to solve is on quantum mechanics: Find the commutator $[xH+pH, p^2]$, where $H$, $x$ and $p$ are the Hamiltonian, space and momentum operator respectively. At the moment, though, I'm stuck on finding the commutator $[AX+BY,Z]$. The correct answer is $A[X,Z]+B[Y,Z]$, though solving it I find: \begin{align} [AX+BY,Z] & = [AX,Z] + [BY,Z] \\ & = A[X,Z] + [A,Z]X + B[Y,Z] + [B,Z]Y \end{align} (used the following properties: $[A+B,C]=[A,C]+[B,C]$ and $[AB,C]=A[B,C]+[A,C]B$) What am I doing wrong? Answer: The identity you've derived is correct and the one in Wikipedia is either wrong or it does an insufficient job at explaining the assumptions it makes. (The Wikipedia page has now been flagged as requiring attention from an expert.) For explicit clarity: if $A,B,X,Y,Z$ are members of a Lie algebra (including, in particular, being operators on a quantum mechanical system, with $[·,·]$ being the commutator), then \begin{align} [AX+BY,Z] & = [AX,Z] + [BY,Z] \\ & = A[X,Z] + [A,Z]X + B[Y,Z] + [B,Z]Y \end{align} is correct.
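The four-term identity can be sanity-checked numerically, since operators can be represented by matrices; the concrete 2x2 matrices below are arbitrary choices of mine, not anything from the problem:

```python
def matmul(P, Q):
    """Multiply two square matrices given as nested lists."""
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matadd(P, Q):
    return [[p + q for p, q in zip(rp, rq)] for rp, rq in zip(P, Q)]

def comm(P, Q):
    """Commutator [P, Q] = PQ - QP."""
    PQ, QP = matmul(P, Q), matmul(Q, P)
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(PQ, QP)]

# Arbitrary non-commuting 2x2 matrices standing in for the operators.
A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
X = [[2, 0], [1, 3]]
Y = [[1, 1], [0, 2]]
Z = [[0, 2], [3, 1]]

lhs = comm(matadd(matmul(A, X), matmul(B, Y)), Z)          # [AX + BY, Z]
rhs = matadd(matadd(matmul(A, comm(X, Z)), matmul(comm(A, Z), X)),
             matadd(matmul(B, comm(Y, Z)), matmul(comm(B, Z), Y)))
assert lhs == rhs   # the four-term expansion is the correct one
```

The extra terms $[A,Z]X$ and $[B,Z]Y$ only vanish when $A$ and $B$ commute with $Z$, e.g. when they are scalars, which is the unstated assumption behind the shorter two-term formula.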
{ "domain": "physics.stackexchange", "id": 51226, "tags": "quantum-mechanics, homework-and-exercises, operators, commutator" }
Understanding MDP variants and "model-free" RL algorithms
Question: RL is based on MDPs. But MDPs have other useful variants such as Semi MDP (variable time), POMDP (partially observable states) etc. Some industrial problems seem to be better suited for SMDP and/or POMDPs. For example, in optimizing maintenance, the time between maintenance events is variable. And some states of equipment may not be available for measurement directly -- hence we have "partially observable" states. If I decide to use a model-free algorithm such as DDPG or PPO, the theory of which I think is based on MDP -- am I compromising on the state or model definition? Will it create a less efficient agent? Of course I may not have a choice as creating an accurate environment is out of the question. But just trying to understand. Appreciate your thoughts. Answer: The performance of a classical agent strongly depends on how close the observation approximates the true state of the environment. The state is sometimes defined as all we need to know to model the dynamics forward in time. Therefore, the closer it is possible to infer the dynamics of the environment with the information provided, the better the performance. A common case study in RL is Atari. There, one observation (pixel image) alone would be insufficient to make progress at all. But if you stack consecutive images together, then this can provide sufficient information for an agent developed for MDPs like DQN to learn games like Breakout, Space Invaders or Pong. Now, this is not enough for games that require memory, like Montezuma's Revenge, which would be considered a POMDP. To address partial observability, a general idea is to use a recurrent neural network (often LSTM or GRU) instead of a feed-forward network in the actor and critic to give the agent additional context if required to make good decisions. The use of an LSTM, plus additional implementation tricks, can give strong results in all Atari games (see R2D2). 
Essentially PPO with a (huge) LSTM was also able to reach super-human performance in DOTA2; see OpenAI Five. To take an example from robotics deployed to the real world: assume we know the positions and velocities of all the joints. Then this information is still partial in practice, as we ignore other factors (like friction, wind speed, terrain). However, the agent would still be able to perform decently. If we had only the positions, then we may be able to infer the velocities by using an LSTM and make it work. If we had only access to the information of one joint, it would probably never work.
{ "domain": "datascience.stackexchange", "id": 11132, "tags": "reinforcement-learning, markov-hidden-model" }
Cellular Immunity Response
Question: What response of cellular immunity would appear after complement protein activation to keep pathogenic bacteria away from our body? Notes: I do appreciate your explanation. Though I am facing multiple choices: (a) wrapping the bacteria cell (b) immobilize the bacteria cell (c) destroying the bacteria cell by enzyme (d) creating pores on the bacteria cell The key answer is (c) but then I do not know what enzymes make this happen. Any idea? Answer: As you can simply find on Wikipedia, cellular immunity, or cell-mediated immunity, is an immune response that does not involve antibodies, but rather involves the activation of phagocytes, antigen-specific cytotoxic T-lymphocytes, and the release of various cytokines in response to an antigen. The answer (c) (destroying the bacteria cell by enzyme) refers to the phagocytes, cells that contain many enzymes, like lysozyme. Those enzymes can destroy the bacterial cell wall. Actually, I think that answer (d) could also be right, but it's not complete. In fact, not every enzyme works on the cell wall!
{ "domain": "biology.stackexchange", "id": 3754, "tags": "homework, immunity" }
Doubly linked list first fit free list malloc/free in Python
Question: As an exercise I've implemented malloc and free in Python as a first fit free list as described here. This tracks which blocks are free in a doubly linked list that is sorted by the address of the first byte of the block. It also keeps track of the size of each allocated block in a dictionary. The time complexity is \$O(N)\$ for both malloc and free, where \$N\$ is the number of free blocks. from typing import Optional class Block(): def __init__(self, size: int, address: int): self.address: int = address self.size = size self.prev: Optional[Block] = None self.next: Optional[Block] = None def __str__(self): return f"(start {self.address}, size {self.size})" def __repr__(self): return f"(start {self.address}, size {self.size})" class Heap(): def __init__(self, size: int): self.size = size self.head: Optional[Block] = Block(size, 0) self.allocation_headers: dict[int, int] = {} def num_free_blocks(self) -> int: block = self.head total = 0 while block: total += 1 block = block.next return total def free_size(self) -> int: block = self.head total = 0 while block: total += block.size block = block.next return total def total_size(self) -> int: free = self.free_size() allocated = sum(self.allocation_headers.values()) return allocated + free def malloc(self, size: int) -> int: if size <= 0: raise Exception # Sort the linked list by address block = self.head while block: if block.size >= size: self.allocation_headers[block.address] = size return_address = block.address if block.size == size: # remove the block if block.prev: block.prev.next = block.next if block.next: block.next.prev = block.prev if self.head == block: self.head = block.next else: # make the block smaller block.address += size block.size -= size return return_address block = block.next raise Exception def free(self, ptr: int): """ Take an address pointer ptr and free it. All we need to do to free is to add an element to the free list. 
""" free_size = self.allocation_headers[ptr] del self.allocation_headers[ptr] prev = None block = self.head while block and block.address < ptr: prev = block block = block.next new_block = Block(free_size, ptr) if not self.head: self.head = new_block if prev: prev.next = new_block new_block.prev = prev if block: block.prev = new_block new_block.next = block if self.head == block: self.head = new_block # coalesce next if new_block.next and new_block.next.address == (new_block.address + new_block.size): new_block.size += new_block.next.size new_block.next = new_block.next.next if new_block.next: new_block.next.prev = new_block # coalesce prev if new_block.prev and new_block.address == (new_block.prev.address + new_block.prev.size): new_block.prev.size += new_block.size new_block.prev.next = new_block.next if new_block.prev.next: new_block.prev.next.prev = new_block.prev Example usage: def test_case_1(): heap = Heap(1000) assert (heap.total_size() == 1000) assert (heap.num_free_blocks() == 1) assert (heap.free_size() == 1000) a = heap.malloc(100) assert (heap.num_free_blocks() == 1) assert (heap.free_size() == 900) assert (a == 0) b = heap.malloc(500) assert (heap.num_free_blocks() == 1) assert (heap.free_size() == 400) assert (b == 100) try: heap.malloc(950) except: pass heap.free(b) assert (heap.num_free_blocks() == 1) assert (heap.free_size() == 900) heap.free(a) try: heap.free(a) except: pass heap.malloc(950) print("Test case 1 succeeded!") I believe this implementation is correct as I've fuzzed it with hundreds of millions of random inputs and all of these conditions were maintained: def perform_checks(self): forward_blocks: list[Block] = [] forward_addresses: list[int] = [] reverse_blocks: list[Block] = [] reverse_addresses: list[int] = [] tail = None block = self.head forward_length = 0 forward_free = 0 last_address = -1 while block: # zoom to the end assert (block.address > last_address) forward_addresses.append(block.address) forward_blocks.append(block) 
forward_length += 1 forward_free += block.size if not block.next: tail = block last_address = block.address block = block.next reverse_length = 0 reverse_free = 0 last_address = 1000000000 if self.head is not None: assert (tail is not None) while tail: assert (tail.address < last_address) reverse_blocks.append(tail) reverse_addresses.append(tail.address) reverse_length += 1 reverse_free += tail.size last_address = tail.address tail = tail.prev assert ( forward_length == reverse_length ), f"Forward length of {forward_length}, Reverse length of {reverse_length}" reverse_blocks.reverse() assert (forward_blocks == reverse_blocks) assert (forward_free == reverse_free) assert (self.total_size() == self.size ), f"Total size {self.total_size()}, size {self.size}" assert (forward_length == len(forward_addresses)) However, I would like suggestions on improving the simplicity of the code. The time complexity and block fragmentation could be improved by switching to a best-fit red/black tree, but let's ignore that. Assuming a flat left-to-right heap, how can we improve this and maintain the \$O(N)\$ time complexity? Could the code be dramatically cleaned up by using a Python built-in data structure? Answer: Trivial DRY: __repr__() should simply return __str__(). If in future they diverge, then so be it. Type hint nit: self.address: int = address Maybe we don't need the int hint? Since address very helpfully offers it. And yes, definitely keep the hint in the signature. It is most visible and most helpful there. In general, I am loving your optional type hints, keep it up! I imagine mypy would be happy with such an input file. If I could criticize the whole Optional[Block] thing for a moment. Maybe have your machinery always allocate a 1-byte block? So there is always something in there? (Yeah, I know, we leaked one byte of storage. We'll get over it.) self.allocation_headers: dict[int, int] = {} I confess I do not understand the meaning of that identifier. 
It seems to me that self.allocation would be the natural name for such a mapping. I have no idea what the usage patterns on num_free_blocks() and free_size() are. If, in some realistic workload, profiling reveals they are called "a lot", consider making them take O(1) constant time. That is, we might choose to have the mutators maintain statistics which could be immediately returned. But let's return to the current O(n) linear implementation. The two functions are nearly identical. I am a little bit sad that we don't have some convenient iterator or other helper that they could rely on. I am reading malloc. if size <= 0: I am reading the spec: If the size of the space requested is 0, the behavior is implementation-defined: the value returned shall be either a null pointer or a unique pointer. Recommend you name your function something other than malloc if you're not keen to support malloc semantics. Also, we can do much better than raising a generic Exception. Go to the trouble of subclassing it so the caller will see a diagnostic error specific to your module. In particular, the caller's try clause should be able to focus on just an AllocationError. This is correct: if block.size == size: # remove the block But consider relaxing that equality criterion. Suppose that the caller is creating random strings with length between 10 and 20 characters that must be allocated. If we discretize allocations to, say, multiples of 4, then the if would find more opportunities to remove a block. In deployed systems, discretizing to logarithmic boundaries tends to be more practical. In free there is a while loop and several ifs. Please push them down into a helper function which computes prev and block and does some mutation. Thank you for providing automated unit tests. Please express the +inf constant as 1_000_000_000 or int(1e9). Please improve PEP-8 readability, perhaps by running the source through black. I've fuzzed it with various random inputs. Potential off-by-one errors abound. 
Consider making hypothesis part of your test suite. Certainly it has taught me amazing things I never would have believed before. Overall? This is good code that achieves its design goals. I would be happy to accept or delegate maintenance tasks for it. Coalescing allocated chunks, at logarithmic granularity, seems the biggest opportunity for combating the somewhat ugly O(n) "linear cost with size of free list" overhead. There is a rich literature describing slab, buddy, and related allocators, likely far beyond the interests of this project. Pick an example workload, benchmark these library functions, describe the results, and see where you'd like to go from there!
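The "convenient iterator" suggested in the review might look like the sketch below; `_blocks` is a name I made up, and the Block/Heap classes are stripped down to just what the traversal needs:

```python
from typing import Iterator, Optional

class Block:
    def __init__(self, size: int, address: int):
        self.size = size
        self.address = address
        self.next: Optional["Block"] = None

class Heap:
    def __init__(self, size: int):
        self.head: Optional[Block] = Block(size, 0)

    def _blocks(self) -> Iterator[Block]:
        """Walk the free list from head to tail."""
        block = self.head
        while block:
            yield block
            block = block.next

    def num_free_blocks(self) -> int:
        return sum(1 for _ in self._blocks())

    def free_size(self) -> int:
        return sum(block.size for block in self._blocks())
```

Both statistics collapse to one-liners, and any future traversal (checking sortedness, debugging the coalescing logic) can reuse the same generator.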
{ "domain": "codereview.stackexchange", "id": 44403, "tags": "python, linked-list, memory-management, complexity" }
Approximate time sync callback delay issue
Question: I have a ros bag with 6 pairs of stereo images. I can subscribe to two (left & right) image topics using the approximate time sync policy. The unusual thing I noticed is that the last image pair somehow never reaches the callback function. Here is the brief code message_filters::Subscriber<Image> image1_sub(nh, "/cam0/image_raw", 100); message_filters::Subscriber<Image> image2_sub(nh, "/cam0/image_raw", 100); typedef sync_policies::ApproximateTime<Image, Image> MySyncPolicy; Synchronizer<MySyncPolicy> sync(MySyncPolicy(1000), image1_sub, image2_sub); sync.registerCallback(boost::bind(&callback, _1, _2)); //callback is below inline void callback(const ImageConstPtr& image1, const ImageConstPtr& image2){ ROS_INFO("Sync_Callback"); // I'm simply counting the printouts to see how many times the callback gets a hit } I've tried it with image_transport::SubscriberFilter with the exact same outcome. Another unusual thing: while the node is still alive I played a different bag file and the image pair from the previous bag (the last one) popped out. It seems as if it is somehow waiting for another message to arrive. This is different from my understanding of how callbacks work. I'm stuck and would appreciate any help. Originally posted by avcx on ROS Answers with karma: 26 on 2018-04-27 Post score: 1 Answer: I think the approximate time synchronizer is waiting until the next image on either topic arrives before it determines that the current pair should be published: http://wiki.ros.org/message_filters/ApproximateTime Note the description of Inter message lower bound Inter message lower bound: if messages of a particular topic cannot be closer together than a known interval, providing this lower bound will not change the output but will allow the algorithm to conclude earlier that a given set is optimal, reducing delays. With the default value of 0, for messages spaced on average by a duration T, the algorithm can introduce a delay of about T. 
With good bounds provided a set can often be published as soon as the last message of the set is received. An incorrect bound will result in suboptimal sets being selected. A typical bound is, say, 1/2 the frame rate of a camera. Originally posted by ahendrix with karma: 47576 on 2018-04-29 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by lucasw on 2023-05-27: Something like this may work depending on expected update rates and expected slop between the messages to be synchronized (both in their header stamps and in the actual arrival times at the sync subscriber): approx_sync.setMaxIntervalDuration(ros::Duration(max_interval)); const float lower_bound = max_interval * 2.0; sync.setInterMessageLowerBound(0, ros::Duration(lower_bound)); sync.setInterMessageLowerBound(1, ros::Duration(lower_bound));
{ "domain": "robotics.stackexchange", "id": 30750, "tags": "ros, message-filter, ros-kinetic" }
Best query complexity of Goldreich-Levin / Kushilevitz-Mansour learning algorithm
Question: What is the best known query complexity of Goldreich-Levin learning algorithm? Lecture notes from Luca Trevisan's blog, Lemma 3, states it as $O(1/\epsilon^4 n \log n)$. Is this the best known in terms of dependence on $n$? I will be particularly grateful for a reference to a citable source! Related question: what is the best known query complexity of Kushilevitz-Mansour learning algorithm? Answer: The question seems somewhat under-specified in the sense that it did not specify the desired error probability of the procedure. Assuming one means constant error probability, then the above is indeed the best I know. For a detailed discussion see Sec 2.5.2.4 in my book "The Foundations of Cryptography - Volume 1" available at http://www.wisdom.weizmann.ac.il/~oded/foc-vol1.html THE ABOVE IS WRONG. SEE CORRECTED ANSWER BELOW. Prop 2.5.6 in the aforementioned section proves a much better bound: The algorithm runs in expected time $O(n \log^3(1/\epsilon))$ times the running time of the guessing procedure (see improvement from $n^2$ to $n$ in the comment right after the proof) and is correct w.p. $\Omega(\epsilon^2)$. Hence, correctness w.p. $2/3$ is obtained in time (factor) ${\tilde O}(n/\epsilon^2)$, which is optimal in some sense (see Exer 30).
{ "domain": "cstheory.stackexchange", "id": 2737, "tags": "cc.complexity-theory, reference-request, lg.learning, fourier-analysis" }
Why use cosine similarity instead of scaling the vectors when calculating the similarity of vectors?
Question: I'm watching a NLP video on Coursera. It's discussing how to calculate the similarity of two vectors. First it discusses calculating the Euclidean distance, then it discusses the cosine similarity. It says that cosine similarity makes more sense when the size of the corpora are different. That's effectively the same explanation as given here. I don't see why we can't scale the vectors depending on the size of the corpora, however. For example in the example from the linked question: User 1 bought 1x eggs, 1x flour and 1x sugar. User 2 bought 100x eggs, 100x flour and 100x sugar User 3 bought 1x eggs, 1x Vodka and 1x Red Bull Vector 1 and 2 clearly have different norms. We could normalize both of them to have length 1. Then the two vectors turn out to be identical and the Euclidean distance becomes 0, achieving results just as good as cosine similarity. Why is this not done? Answer: Let $u, v$ be vectors. The "cosine distance" between them is given by $$d_{\cos}(u, v) = 1 - \frac{u}{\|u\|} \cdot \frac{v}{\|v\|} = 1 - \cos \theta_{u,v},$$ and the proposed "normalized Euclidean distance" is given by $$d_{NE}(u, v) = \left\| \frac{u}{\|u\|} - \frac{v}{\|v\|} \right\| = d_E(\frac{u}{\|u\|}, \frac{v}{\|v\|}).$$ By various symmetries, both distance measures may be written as a univariate function of the angle $\theta_{u,v}$ between $u$ and $v$. [1] Let's then compare the distances as a function of radian angle deviation $\theta_{u,v}$. Evidently, they both have the same fundamental properties that we desire -- strictly increasing monotonicity for $\theta_{u,v} \in [0, \pi]$ and appropriate symmetry and periodicity across $\theta_{u,v}$. Their shapes are different, however. Euclidean distance disproportionately punishes small deviations in the angles larger than is arguably necessary. Why is this important? Consider that the training algorithm is attempting to reduce the total error across the dataset. 
With Euclidean distance, law-abiding vectors are unfairly punished ($\frac{1}{2} d_{NE}(\theta_{u,v} = \pi/12) = 0.125$), making it easier for the training algorithm to get away with much more serious crimes ($\frac{1}{2} d_{NE}(\theta_{u,v} = \pi) = 1.000$). That is, under Euclidean distance, 8 law-abiding vectors are just as bad as maximally opposite-facing vectors. Under cosine distance, justice is meted out with more proportionate fairness so that society (the sum of error across the dataset) as a whole can get better. [1] In fact, $d_{\cos}(u, v) = \frac{1}{2} (d_{NE}(u, v))^2$.
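The footnote identity and the shopping-cart example from the question can both be checked directly; the vectors below encode (eggs, flour, sugar, vodka, Red Bull) purchase counts:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine_distance(u, v):
    nu, nv = normalize(u), normalize(v)
    return 1 - sum(a * b for a, b in zip(nu, nv))

def normalized_euclidean(u, v):
    nu, nv = normalize(u), normalize(v)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(nu, nv)))

user1 = [1, 1, 1, 0, 0]        # 1x eggs, flour, sugar
user2 = [100, 100, 100, 0, 0]  # same basket, 100x larger
user3 = [1, 0, 0, 1, 1]        # eggs, vodka, Red Bull

# Both measures agree that users 1 and 2 are identical after normalization...
assert cosine_distance(user1, user2) < 1e-12
assert normalized_euclidean(user1, user2) < 1e-12

# ...and the identity d_cos = (1/2) d_NE^2 holds for any pair.
d_cos = cosine_distance(user1, user3)
d_ne = normalized_euclidean(user1, user3)
assert abs(d_cos - 0.5 * d_ne ** 2) < 1e-12
```

So normalizing and then taking Euclidean distance is a monotone transform of cosine distance; the answer's point is that the two curves weight angular deviations differently, not that one fixes a problem the other cannot.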
{ "domain": "datascience.stackexchange", "id": 11137, "tags": "machine-learning, nlp, clustering, similarity" }
What is the fundamental definition of force?
Question: As I pick up more physics I see that the definitions of force commonly provided in books and classrooms are misleading. "A force is a push or pull." This seems to be a "correct" definition but it doesn't provide enough information. "A force is the influence of one body on another." This is not sufficient because as other people have pointed out to me, force is more so the relationship between two bodies as opposed to how one acts on another. This is more evident with forces such as electricity and gravity. "$\vec{F} = m \cdot \vec{a}$." My understanding is that this is not a mathematical definition, but rather a scientific observation. Rigorous application of the scientific method led us to conclude that the relationship between force and acceleration is proportional, and the constant of proportionality is the mass of the given object. It's not a definition in the sense that we define velocity as displacement over time. Can someone please provide an intuitive, natural definition which describes the inherent behavior between objects/bodies in the physical world? I understand that there are many different kinds of forces but since we call them all "forces" there must be a good way of defining all of them in a singular manner. Answer: In Newtonian Mechanics In Newtonian mechanics, a force is a mathematical vector we prescribe onto a model of a physical system by declaring a force law. In other words, it's an intermediate mathematical gadget we invoke to do calculations in our models. It is invoked between the inputs (initial conditions) and outputs (predictions) of data but it is never measured directly (time, position, velocity, etc. are what are ultimately recorded directly). To put it more bluntly, it is a primitive notion that can't be reduced further unless you step outside Newtonian mechanics. 
A force is supposed to be a mathematical concretization of what we intuitively regard as a "push or a pull" but there isn't going to be a perfect correspondence between the notion of forces and "pushes or pulls." It is only a rigorous refinement of those intuitive notions. This is similar to how the concepts of a line or a point used to be primitive concepts in Euclidean geometry until the advent of real analysis and set theory. Prior to the widespread adoption of coordinates, the notion of a line was a mathematical primitive that corresponded to our intuitive notion of lines. The best we could do isn't to give a direct definition, but to give an axiomatic characterization of what we consider a line to be and work from there. This is also similar to how the wavefunction is invoked as a mathematical gadget to do calculations for models of quantum systems; the wavefunction is also invoked between inputs and outputs but it is never directly measured. (Of course, I should say that unlike forces and lines, there is no intuitive thing in the real world that we can point to that the wavefunction corresponds to, which makes the situation in quantum mechanics more difficult to deal with. Moreover, while the concept of a force has a direct correspondence with position measurements through a spring, balance, or a general force gauge, the concept of a wavefunction doesn't have a direct correspondence with any specific measurement unless you take an ensemble of identically prepared systems.) Consider the example below. Example. Suppose I want to model a binary star system. I model the two stars as point objects with masses $m_{A}$ and $m_{B}$, and then I appeal to Newton's law of universal gravitation to specify the force as $$ \vec{F}_{\text{A on B}} = -\frac{Gm_{A}m_{B}}{r^{2}}\hat{r} $$ where $\vec{r}$ is the vector from star $A$ to star $B$. This is something I put into my model, because this law has been successful for astronomical predictions. 
Another example is given below. Example. Suppose I want to model a harmonic oscillator put into a fluid with some drag. Then I postulate two force laws: the spring force $$ \vec{F} = -k(\vec{x} - \vec{x}_{0}), $$ and the linear drag force $$ \vec{F} = -b\vec{v} $$ where $b, k$ are some positive constants. Some other notes: Something else that is important to understand is that neither Newton's first nor second law are used to define what is a force. Their roles are merely to relate force to motion. It's the force law specific to the situation that specifies the force, and then Newton's laws relate it to motion. Newton's first and second laws are not definitions of force so much as they are axiomatic characterizations of the relationship between force and motion. There is a subtle difference, because at no point do we say "a force is defined as blah-blah-blah" in either of the laws. The role of Newton's first and second laws are to relate force to the motion of objects, and in the process of doing this they elucidate what it means for a force to be "a push or a pull" or to be "an influence of one body on another." Newton's third law is different from the other two laws, because unlike the first two laws the third law gives a constraint on what the possible force laws (which are the things that specify what the force is in a given scenario) there can be. In many cases, we actually ignore this law to simplify our models (for example when we consider a spring attached to a wall, we simplify our scenario by ignoring the fact that the motion of the spring imparts some momentum to the Earth). Some forces are "more fundamental" in the sense that we can derive other forces from the more fundamental ones. For example, the spring and drag forces come from more elementary forces that act on the molecules of the substances. 
If you come across a new situation that no one else has analyzed, you will have to take apart the situation and see what mechanisms are at play that you are familiar with. From this you can deduce the force laws at play. The force laws can come from other theories such as electromagnetism where force is defined by the electric and magnetic fields (which are yet another slew of mathematical gadgets we invoke). In the worst case scenario where you can't take the situation apart, you guess the force law and empirically test whether or not your guess leads to correct predictions. In Lagrangian Mechanics In different theories, we start out with different primitive notions. If we start with Newtonian mechanics, then mathematically speaking force is going to have to be a primitive notion. If we start with Lagrangian mechanics, then the Lagrangian $\mathcal{L}(q, \dot{q}, t)$ will be the primitive notion, and force will be defined as $$ F_{i} = \frac{\partial\mathcal{L}}{\partial q_{i}}. $$ For $\mathcal{L} = T-U$, force ends up being defined as the negative gradient of potential energy: $\vec{F} = -\nabla U$. Other Theories Note that outside of any specific theory, the word "force" doesn't have a rigorous definition, and it is mostly an intuitive concept. Also, the word "force" can be attached to different things in different theories (so in the broader sense it's an evolving concept as we figure out more things about the universe). In quantum field theory and general relativity, all Newtonian forces emerge out of more fundamental effects. We can choose to refer to those effects as forces or we can choose to say none of them are forces; the difference becomes a matter of semantics at that point. In GR, the gravitational force emerges out of the dynamics of the motion of bodies in curved spacetime. 
In the Newtonian limit (objects moving slowly, weak gravitational field, relatively static gravitational field), Einstein's equation reduces as $$ G_{\mu\nu} = \frac{8\pi G}{c^{4}} T_{\mu\nu}\rightarrow \nabla^{2}\phi = 4\pi G\rho_{\text{mass}} $$ and the geodesic equation reduces as $$ \frac{{d}^2x^\mu}{{d}\tau^2}+\Gamma^\mu_{\nu\rho}\frac{{d}x^\nu}{{d}\tau}\frac{{d}x^\rho}{{d}\tau}=0 \rightarrow \frac{d^{2}\vec{x}}{dt^{2}} = -\nabla\phi. $$ If $m$ is the mass of the object being affected by gravity, then we can choose to call $m\cdot -\nabla\phi$ as the force $\vec{F}$ in Newtonian mechanics. By multiplying both sides of the last equation by the mass $m$ of the object, we recover the usual Newtonian formalism. In QFT, three of the four fundamental interactions come down to an exchange of bosons between fermion particles, and the last interaction, gravity, is hypothesized to have a corresponding graviton particle (of course the nature or lack thereof of quantum gravity is currently outside the scope of experimental testing).
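The damped-oscillator example from the Newtonian section can be turned into a few lines of simulation; the constants k, b, m, the starting state, and the time step below are assumed values for illustration only:

```python
# Semi-implicit Euler integration of m*a = -k*(x - x0) - b*v,
# the spring-plus-linear-drag force laws postulated in the example.
k, b, m = 4.0, 0.5, 1.0   # assumed spring constant, drag coefficient, mass
x0 = 0.0                  # spring equilibrium position
x, v = 1.0, 0.0           # start stretched by 1, at rest
dt = 0.001

initial_energy = 0.5 * m * v**2 + 0.5 * k * (x - x0)**2
for _ in range(20_000):   # simulate 20 seconds
    force = -k * (x - x0) - b * v   # the force law is an input to the model
    v += (force / m) * dt           # Newton's second law converts it to motion
    x += v * dt

final_energy = 0.5 * m * v**2 + 0.5 * k * (x - x0)**2
# Drag steadily removes energy, so the oscillation dies down toward x0.
```

This illustrates the answer's framing: the force laws are declared inputs to the model, and Newton's second law only turns them into predicted motion.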
{ "domain": "physics.stackexchange", "id": 87835, "tags": "newtonian-mechanics, forces, definition" }
Proving Clique Number of a Regular Graph
Question: I am very new to Graph Theory and I am trying to prove the following statement from a problem set for my class: Prove that if G is a regular graph on n vertices $(n \ge 2)$, then $\omega(G) \in \{1, 2, 3,... \lfloor n / 2 \rfloor, n\}$ I am confused by the part where it restricts the clique number to this set: $\omega(G) \in \{1, 2, 3,... \lfloor n / 2 \rfloor, n\}$. Why can the clique number be only in the first half of this set (or it can be n) and why can't it be anything between $\lfloor n / 2 \rfloor$ and $n$? How can I go about proving this claim? Any tips would be appreciated. Answer: Let $G$ be a $d$-regular graph on $n$ vertices containing an $a$-clique $A$, and let $B$ denote the other $b:=n-a$ vertices. Suppose that $a>b$. Let $e$ be the number of edges connecting $A$ to $B$. Every vertex in $A$ has $a-1$ edges going to the other vertices in $A$, and so $d-(a-1)$ edges going to vertices in $B$. Hence $e = a(d-(a-1))$. Similarly, every vertex in $B$ has at most $b-1$ edges going to the other vertices in $B$, and so at least $d-(b-1)$ edges going to vertices in $A$. Hence $e \geq b(d-(b-1))$. It follows that $$ b(d-(b-1)) \leq a(d-(a-1)). $$ Subtracting the left-hand side from the right-hand side, we get $$ 0 \leq (ad-a^2+a)-(bd-b^2+b) = (ad-bd)-(a^2-b^2)+(a-b) = (a-b)(d-a-b+1). $$ Since $a > b$, it follows that $d \geq a+b-1 = n-1$. In other words, $G$ is the complete graph. Summarizing, if a regular graph contains a clique on more than half the vertices then it is the complete graph. Therefore the clique number of a regular graph is either at most $\lfloor n/2 \rfloor$ or $n$. Let us now show that the upper bound $\lfloor n/2 \rfloor$ is tight. That is, for every $n$ there exists a regular graph on $n$ vertices whose clique number is $\lfloor n/2 \rfloor$. When $n$ is even, there exist regular graphs on $n$ vertices with clique number $n/2$, for example two disjoint copies of $K_{n/2}$. 
When $n=4m+1$, there exist regular graphs on $n$ vertices with clique number $2m$: take two disjoint copies of $K_{2m}$, add a new vertex connected to $m$ vertices from each copy of $K_{2m}$, and add a matching between the remaining $m$ vertices of each copy of $K_{2m}$. When $m=1$, this gives the 5-cycle. When $n=4m+3$, the following construction gives a regular graph on $n$ vertices with clique number $2m+1$. Take two disjoint copies of $K_{2m+1}$, say on vertices $x_1,\ldots,x_{2m+1}$ and $y_1,\ldots,y_{2m+1}$. Connect $x_i$ to $y_i$ for all $i$, and connect $x_i$ to $y_{i+1}$ for $i=1,\ldots,m$. Finally, add an additional vertex connected to $x_{m+1},\ldots,x_{2m+1}$ and to $y_1,y_{m+2},\ldots,y_{2m+1}$.
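The even-$n$ construction (two disjoint copies of $K_{n/2}$) is easy to verify by brute force for small $n$. A pure-Python sketch — the function names are mine, and the brute-force clique search is only meant for tiny graphs:

```python
from itertools import combinations

def clique_number(n, edges):
    # Brute-force largest clique size; fine for small n only.
    adj = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    for size in range(n, 0, -1):
        for verts in combinations(range(n), size):
            if all((u, v) in adj for u, v in combinations(verts, 2)):
                return size
    return 0

def two_cliques(m):
    # Two disjoint copies of K_m on vertices 0..2m-1.
    edges = [(u, v) for u, v in combinations(range(m), 2)]
    edges += [(u + m, v + m) for u, v in combinations(range(m), 2)]
    return 2 * m, edges

n, edges = two_cliques(4)                      # n = 8 vertices
degrees = [sum(1 for e in edges if v in e) for v in range(n)]
assert all(d == 3 for d in degrees)            # the graph is (m-1)-regular
assert clique_number(n, edges) == n // 2       # clique number is exactly n/2
```

The same `clique_number` helper can also be pointed at the odd-$n$ constructions above to check them for small $m$.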
{ "domain": "cs.stackexchange", "id": 11252, "tags": "graphs, clique" }
On which principle does string telephone work?
Question: I am 15 years old and I was assigned to find out about the string telephone and make a project. I searched Google a lot but can't find the exact principle. What I found is that the first person speaks into one cup, and another person, with his cup on his ear, can listen. But I want to know: on which principle of sound does it work? Please help me fast. Thanks in advance to all who help me. Answer: The basic principle is the conversion of acoustic vibrations into vibrations that travel through the string. Sound travels faster through solids than through air, and the taut string guides the vibrations from one cup to the other instead of letting them spread out in all directions, which makes this method carry over longer distances than ordinary speech. When you speak into your cup, the bottom of the cup vibrates like a diaphragm. These vibrations are converted into longitudinal vibrations which travel along the string, and when they reach the other cup, they cause the bottom of that cup to vibrate like a diaphragm, as acoustic vibrations, audible to the person at the other end. In landline telephones, the acoustic vibrations are instead converted to electrical currents (coded) and then decoded into acoustic vibrations at the other end, which means that they can be used over even longer distances, provided that there is a wired connection for the current to flow. Do keep in mind that string telephones work only when the string is kept taut, so that the vibrations are conveyed from one cup to the other.
{ "domain": "physics.stackexchange", "id": 36872, "tags": "acoustics, string" }
Reduce square root to simplest radical form
Question: The function redsqrt(n) prints out the most reduced form of the square root of n. It calls on another function perfsq(n), which produces a list of perfect squares less than or equal to n.

from math import sqrt

def perfsq(n):
    # Returns list of perfect squares less than or equal to n
    l = [1]
    i, x = 0, 3
    while l[-1]+x <= n:
        l.append(l[i]+x)  # Add x to get next perfect square
        i += 1
        x = 2*i+3
    return l

def redsqrt(n):
    # Prints most reduced form of square root of n
    if n < 0:
        print('Negative input')
        return
    if sqrt(n).is_integer():
        print(int(sqrt(n)))  # Square root is an integer
        return
    # Find perfect squares that are factors of n
    l = [i for i in perfsq(n/2) if n % i == 0 and i > 1]
    if len(l) == 0:
        print('\u221A', n)  # Square root is irreducible
    else:
        a = int(sqrt(max(l)))  # Coefficient
        b = int(n/max(l))  # Argument of the square root
        print(a, '\u221A', b)  # Reduced square root

Examples:

redsqrt(5.7)
√ 5.7
redsqrt(100)
10
redsqrt(2000)
20 √ 5
redsqrt(2040506)
13 √ 12074

Answer: Style

Give your code some love and help readers by adding blank lines to separate logical sections of code. It is recommended to use two blank lines before function definitions. Spaces after commas and around operators also help make the code more readable. Some of your comments would be better suited as docstrings, and some others barely state what you are doing, which we could read from the code already. Comments should focus on why instead. Lastly, using meaningful variable names plays a large role in making your algorithm understandable. Avoid abbreviations.
from math import sqrt


def perfect_square(limit):
    accumulation_list = [1]
    index, increment = 0, 3
    while accumulation_list[-1] + increment <= limit:
        accumulation_list.append(accumulation_list[index] + increment)
        index += 1
        increment = 2 * index + 3
    return accumulation_list


def reduced_sqrt(n):
    """Print most reduced form of square root of n"""
    if n < 0:
        print('Negative input')
        return
    if sqrt(n).is_integer():
        print(int(sqrt(n)))
        return
    # Find perfect squares that are factors of n
    factors = [square for square in perfect_square(n/2)
               if n % square == 0 and square > 1]
    if len(factors) == 0:
        print('\u221A', n)  # Square root is irreducible
    else:
        a = int(sqrt(max(factors)))  # Coefficient
        b = int(n / max(factors))  # Argument of the square root
        print(a, '\u221A', b)  # Reduced square root

Unnecessary work

Why compute the perfect squares starting from 1 if you are going to filter this value out later on? Why compute all these values at all if you're only going to take the max of all factors? You also compute sqrt(n) and max(factors) several times when you could store the result in a variable. Most importantly, you compute the perfect squares rather expensively only to recompute a fairly expensive sqrt when you find one as a factor. Instead you could iterate from the nearest integer lower than sqrt(n) down to 1 and see if its square is a factor of n. Once found, you have your factor without computing the remaining values. If none can be found, you know that you have an irreducible square root.
A first draft could be:

from math import sqrt


def reduced_sqrt(n):
    """Print most reduced form of square root of n"""
    if n < 0:
        print('Negative input')
        return
    root = int(sqrt(n))
    found = False
    for factor_root in range(root, 1, -1):
        factor = factor_root * factor_root
        if n % factor == 0:
            found = True
            reduced = n // factor
            if reduced == 1:
                # n was a perfect square
                print(factor_root)
            else:
                print(factor_root, '\u221A', reduced)
            break
    if not found:
        # irreducible square root
        print('\u221A', n)

Reusability

It is bad practice to mix computation and output in the same function, as it makes it harder to test and reuse. You should instead return a couple "coefficient/reduced square root" so that the caller can handle the logic of pretty printing the output. This also means you need better handling of your error case, the standard in Python being to raise an exception. Here raise ValueError('negative input') should be enough. But you don't even need that, as sqrt of a negative number already raises a ValueError:

>>> sqrt(-1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: math domain error

Lastly, returning values lets you short-circuit your computation and removes the need for the found flag:

from math import sqrt


def reduced_sqrt(n):
    """Return most reduced form of square root of n
    as the couple (coefficient, reduced_form)
    """
    root = int(sqrt(n))
    for factor_root in range(root, 1, -1):
        factor = factor_root * factor_root
        if n % factor == 0:
            reduced = n // factor
            return (factor_root, reduced)
    return (1, n)


def print_reduced_sqrt(n):
    coefficient, reduced = reduced_sqrt(n)
    if coefficient == 1:
        print('\u221A', reduced)
    elif reduced == 1:
        print(coefficient)
    else:
        print(coefficient, '\u221A', reduced)
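Since the final version returns a value instead of printing, it is easy to test. A quick self-contained property check (restating the function so the snippet runs on its own): for every n, coefficient² · reduced must equal n, and reduced must be squarefree.

```python
from math import sqrt

def reduced_sqrt(n):
    # Same logic as the final tuple-returning version above.
    root = int(sqrt(n))
    for factor_root in range(root, 1, -1):
        factor = factor_root * factor_root
        if n % factor == 0:
            return factor_root, n // factor
    return 1, n

for n in range(1, 2001):
    coefficient, reduced = reduced_sqrt(n)
    # The decomposition must reconstruct n exactly
    assert coefficient * coefficient * reduced == n
    # reduced must be squarefree: no square > 1 divides it
    assert all(reduced % (k * k) for k in range(2, int(sqrt(reduced)) + 1))
```

This kind of loop catches off-by-one mistakes in the `range` bounds far faster than eyeballing printed output.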
{ "domain": "codereview.stackexchange", "id": 22434, "tags": "python, beginner, python-3.x" }
curly braces in sequence motifs
Question: What do curly braces in sequence motifs stand for? E.g. in RTCRYBN{4}ACG, what is N{4}? Moreover, I notice that in TRANSFAC matrix notation the N{4} is completely omitted:

NA  Abf1p
XX
DE  RTCRYBN{4}ACG
XX
P0      A      C      G      T
01  0.500  0.000  0.500  0.000  R
02  0.000  0.000  0.000  1.000  T
03  0.000  1.000  0.000  0.000  C
04  0.500  0.000  0.500  0.000  R
05  0.000  0.500  0.000  0.500  Y
06  0.000  0.333  0.333  0.333  B
XX
P1      A      C      G      T
01  1.000  0.000  0.000  0.000  A
02  0.000  1.000  0.000  0.000  C
03  0.000  0.000  1.000  0.000  G

Moreover, the MEME suite's transfac2meme completely ignores the second chunk of the matrix after the P1 row. Answer: N is the IUPAC code for any nucleotide, so in a DNA sequence an N signifies that any one of the four bases could be in that position. The {4} means 4 of the previous character in the pattern, or NNNN. In Perl regular expressions \d{4} means match 4 digits in a row, so the notation is quite similar.
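Because `{4}` is exactly regex-style repetition, an IUPAC motif can be expanded mechanically into an ordinary regular expression. A small sketch — the IUPAC-to-base mapping is the standard code, while the function name is mine:

```python
import re

# Standard IUPAC nucleotide codes mapped to regex character classes
IUPAC = {
    'A': 'A', 'C': 'C', 'G': 'G', 'T': 'T',
    'R': '[AG]', 'Y': '[CT]', 'S': '[CG]', 'W': '[AT]',
    'K': '[GT]', 'M': '[AC]', 'B': '[CGT]', 'D': '[AGT]',
    'H': '[ACT]', 'V': '[ACG]', 'N': '[ACGT]',
}

def motif_to_regex(motif):
    # Repetition counts like {4} pass through unchanged;
    # every IUPAC letter becomes its character class.
    return ''.join(IUPAC.get(ch, ch) for ch in motif)

pattern = re.compile(motif_to_regex('RTCRYBN{4}ACG'))
assert pattern.fullmatch('ATCACGTTTTACG')      # N{4} matches any four bases
assert not pattern.fullmatch('ATCACGACG')      # the four Ns are required
```

So RTCRYBN{4}ACG becomes `[AG]TC[AG][CT][CGT][ACGT]{4}ACG`, which makes it clear why a position-weight matrix simply has nothing informative to record for the four N columns.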
{ "domain": "biology.stackexchange", "id": 5017, "tags": "sequence-analysis, software, nucleic-acids, transcription-factor" }
Simplification of regular expression and conversion into finite automata
Question: This is a beginner's question. I am reading the book "Introduction to Computer Theory" by Daniel Cohen, but I end up with confusion regarding simplification of regular expressions and finite automata. I want to create an FA for the regular expression $\qquad \displaystyle (a+b)^* (ab+ba)^+a^+\;.$ My first question is: how can we simplify this expression? Can we write the middle part as $(ab+ba)(ab+ba)^*$? Will this simplify the expression? My second question is whether the automaton given below is equivalent to this regular expression. If not, what is the mistake? This is not homework, but I want to learn this basic example. Please bear with me, as I am a beginner. Answer: At the point of writing, your NFA is still a bit off. One version of a correct answer looks like this: So this NFA is designed in an ad hoc manner, but there's still some basic organisation to it. You can see that there are three basic bits: the $a,b$ loop on the start state, a forced $ab|ba$, followed by two 2-step loops, then a forced $a$ to the final state, with an $a$ loop. The first loop takes care of the $(a+b)^{\ast}$, the next little clump does the $(ab+ba)^{+}$, then the tail end does the $a^{+}$. The key thing to practice when doing things this way is breaking the RE down into a sequence of smaller REs that you then stick together. This is essentially a 'casual' version of the systematic method. The systematic way is simple, but a bit fiddly. It's a relatively long explanation as to how to do it systematically, and as you haven't reached that bit in your reading yet, I'll refrain from working through the details here. However, just as a quick reference, this explanation is pretty thorough and reasonably well explained.
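One practical way to check a hand-built automaton is to test strings against the regular expression itself. The textbook's $+$ (union) is `|` in Python's `re`, and `*`/`+` mean the same thing, so a quick sanity harness looks like this (the word lists are my own choices):

```python
import re

# (a+b)* (ab+ba)+ a+  in textbook notation, as a Python regex
pattern = re.compile(r'(a|b)*(ab|ba)+a+')

accepted = ['aba', 'abaa', 'bbaa', 'ababa', 'aabbaaba']
rejected = ['', 'a', 'ab', 'ba', 'bb', 'aab']

assert all(pattern.fullmatch(w) for w in accepted)
assert not any(pattern.fullmatch(w) for w in rejected)
```

Running your candidate NFA on the same word lists and comparing the verdicts catches most construction mistakes quickly, even before you learn the systematic RE-to-NFA method.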
{ "domain": "cs.stackexchange", "id": 627, "tags": "formal-languages, automata, finite-automata, regular-expressions" }
Turtlebot with roomba 780. Error message: [Distance, angle displacement too big, invalid readings from robot. Distance: 1.84, Angle: -21.02]
Question: I am trying to run a roomba 780 but get an error message: [Distance, angle displacement too big, invalid readings from robot. Distance: 1.84, Angle: -21.02]. Please check that the Create is powered on and that the connector is plugged into the Create. The launch file I am using is:

<launch>
  <node name="turtlebot_node" type="turtlebot_node.py" pkg="turtlebot_node">
    <rosparam>
      port: /dev/ttyUSB0
      publish_tf: True
      robot_type: roomba
      has_gyro: False
    </rosparam>
  </node>
</launch>

After running the roomba launch file I get the error message below. Also, when I run this launch file I can see that I actually am communicating with my roomba, as the roomba turns off and will not start again until I lift it off the floor. It is as if it goes into some kind of protection mode?! I use an iRobot USB-to-DIN cable.

turtlebot@turtlebot:~$ roslaunch rob.launch
... logging to /home/turtlebot/.ros/log/bc774c0e-15f0-11e2-99a6-00215c5e4e7b/roslaunch-turtlebot-4549.log
Checking log directory for disk usage. This may take awhile. Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://turtlebot:53600/

SUMMARY
========

PARAMETERS
 * /rosdistro
 * /rosversion
 * /turtlebot_node/has_gyro
 * /turtlebot_node/port
 * /turtlebot_node/publish_tf
 * /turtlebot_node/robot_type

NODES
  /
    turtlebot_node (turtlebot_node/turtlebot_node.py)

auto-starting new master
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[master]: started with pid [4566]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to bc774c0e-15f0-11e2-99a6-00215c5e4e7b
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[rosout-1]: started with pid [4580]
started core service [/rosout]
Exception AttributeError: AttributeError("'_DummyThread' object has no attribute '_Thread__block'",) in <module 'threading' from '/usr/lib/python2.7/threading.pyc'> ignored
process[turtlebot_node-2]: started with pid [4592]
/opt/ros/fuerte/stacks/turtlebot/turtlebot_node/nodes/turtlebot_node.py:54: UserWarning: roslib.rosenv is deprecated, please use rospkg or rosgraph.rosenv
  import roslib.rosenv
[ERROR] [WallTime: 1350213455.602533] Failed to contact device with error: [Distance, angle displacement too big, invalid readings from robot. Distance: 1.84, Angle: -21.02]. Please check that the Create is powered on and that the connector is plugged into the Create.
[ERROR] [WallTime: 1350213462.009986] Failed to contact device with error: [Distance, angle displacement too big, invalid readings from robot. Distance: 1.84, Angle: -21.02]. Please check that the Create is powered on and that the connector is plugged into the Create.
^C[turtlebot_node-2] killing on exit
[rosout-1] killing on exit
[master] killing on exit
shutting down processing monitor...
...
shutting down processing monitor complete
done

I know that the 780 should use a baud rate of 115200, which is the same as for the 500 models. Has anyone seen this error with the angle message before? Thanks

Originally posted by krst on ROS Answers with karma: 11 on 2012-10-16
Post score: 0

Original comments

Comment by krst on 2012-10-17: Well, I am not sure if I understand that answer. Or is that a spam answer?

Comment by Lorenz on 2012-10-17: Probably it was spam. Deleting it...

Comment by krst on 2012-10-17: Yes, I will try to do that. I have read on some other forums that the communication should be similar, but similar is not exactly the same! The information is not really open as far as I have seen, but maybe iRobot will want to share it. I will then post it here. I have also made a Kinect mod so you can use it with one USB cable without a power adapter (found on another forum), so I hope I can contribute something back also :-)

Answer: The problem is probably that the communication protocol of the roomba 780 is slightly different from the 500 series. You will have to debug the problems manually. I would first try to get the protocol specifications of the 500 series and the 780 from iRobot and compare them. Then you might need to patch the turtlebot node to add support for your robot. Patches are of course welcome.

Originally posted by Lorenz with karma: 22731 on 2012-10-17
This answer was ACCEPTED on the original site
Post score: 0
{ "domain": "robotics.stackexchange", "id": 11401, "tags": "ros, roomba" }
Is there a way to prove that a bound state wavefunction can always be chosen real for an arbitrary potential in Quantum Mechanics?
Question: As we can prove many things that always (at least in introductory quantum mechanical problems) apply using an arbitrary potential (like that $E>V_{\rm min}$ or else the solutions are non-normalizable and superpositions of them can't produce normalizable wavefunctions), is there a way to generally prove for an arbitrary potential that bound states always correspond to real functions? Answer: I) Yes, the time independent Schrödinger equation (TISE) of the form $$\left(-\frac{\hbar^2}{2m} {\bf \nabla}^2 +V({\bf r}) -E \right) \psi({\bf r}) ~=~0 $$ is $\mathbb{C}$-linear and invariant$^1$ under complex conjugation. So if the wave function $\psi$ is a solution with finite square-norm, then so will $\psi^{\ast}$, $$\frac{\psi+\psi^{\ast}}{2}\quad\text{and}\quad \frac{\psi-\psi^{\ast}}{2i}$$ be. The two latter are real solutions, and at least one of them has non-zero square-norm if $\psi$ has non-zero square-norm. Hence we can always choose a normalizable solution to be real. See also Problem 2.1b in Griffiths, Intro to QM, and this related Phys.SE post. II) Note that the same argument does not apply to scattering states in the continuous spectrum, since the boundary conditions at infinity of $\psi^{\ast}$ could be off. Also it does not apply to the time dependent Schrödinger equation (TDSE). -- $^1$ Here we have implicitly used that the eigenvalue $E\in\mathbb{R}$ is real because the Hamiltonian $H$ is self-adjoint. Note that self-adjointness alone is not enough. E.g. $$H~=~\frac{1}{2m}\left(\frac{\hbar}{i} {\bf \nabla}-q {\bf A}({\bf r}) \right)^2 +V({\bf r})$$ is self-adjoint, but the corresponding TISE is not invariant under complex conjugation.
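The argument in part I is easy to see numerically: for a real (self-adjoint, complex-conjugation-invariant) Hamiltonian, the real and imaginary parts of any eigenvector are themselves eigenvectors. A toy sketch with a 2×2 real symmetric matrix standing in for $H$ (pure Python, no physics units; the example matrix and phase are arbitrary choices of mine):

```python
import cmath

# Real symmetric "Hamiltonian" with eigenvector (1, 1) and eigenvalue E = 1
H = [[0.0, 1.0], [1.0, 0.0]]
E = 1.0

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# A genuinely complex solution: the real eigenvector times a global phase
phase = cmath.exp(1j * 0.7)
psi = [phase * 1.0, phase * 1.0]
assert all(abs(h - E * p) < 1e-12 for h, p in zip(matvec(H, psi), psi))

# Its real part solves the same eigenvalue problem with the same E
re_psi = [p.real for p in psi]
assert all(abs(h - E * p) < 1e-12 for h, p in zip(matvec(H, re_psi), re_psi))
```

This is exactly the $\frac{\psi+\psi^{\ast}}{2}$ combination from the answer, in finite dimensions; the magnetic-field counterexample in the footnote corresponds to $H$ acquiring imaginary entries, at which point taking the real part no longer preserves the eigenvalue equation.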
{ "domain": "physics.stackexchange", "id": 29701, "tags": "quantum-mechanics, wavefunction, symmetry, schroedinger-equation, complex-numbers" }
Returning a cached contact by ID
Question: This class returns a cached contact by ID. The two cached functions here are doing the same thing; the only difference is an early return statement versus using else and having only one return statement. The CacheHelper class is a static class that is a wrapper around .NET's MemoryCache class. It's a DLL reference, so I don't have the source code for it. I personally think the function with else is more readable. However, from what I read on the web, I should avoid using else whenever possible. Also, there is a reason I'm checking if the cache entry exists first instead of directly getting it. In some cases, null Contact objects are added to the cache, so if I do a get directly, the logic won't be right: Get would return an empty object whether the cache entry didn't exist or a null object was stored in the cache. I understand that storing null objects does not make much sense. I'm in the process of refactoring it, so for now please focus only on the question about whether or not to use else.

public class ContactHelper
{
    public Contact GetContact(int ID)
    {
        return ContactService.GetContact(ID);
    }

    public Contact GetCachedContact(int ID)
    {
        string Key = ID.ToString();
        Contact Con = default(Contact);
        if (CacheHelper.Exists(Key))
        {
            Con = (Contact)CacheHelper.Get(Key);
        }
        else
        {
            Con = GetContact(ID);
            CacheHelper.Add(Key, Con);
        }
        return Con;
    }

    public Contact GetCachedContact_2(int ID)
    {
        string Key = ID.ToString();
        if (CacheHelper.Exists(Key))
        {
            return (Contact)CacheHelper.Get(Key);
        }
        Contact Con = GetContact(ID);
        CacheHelper.Add(Key, Con);
        return Con;
    }
}

Answer: It's debatable which is more readable. I've worked in places where the common opinion is that an explicit else is more readable, and I've worked in places that consider the terser the code, the better. If I had to choose, I'd personally go for the one without the else, because it's fewer lines of code to maintain, but my general advice is that within the overall picture of developing your app, it probably doesn't matter that much.
If you decide that the way with the else is more readable, I wouldn't initialize Con at all, because it gets overwritten in either case.
{ "domain": "codereview.stackexchange", "id": 22398, "tags": "c#" }
Why doesn't decision tree work in case find minimum
Question: When we would like to prove a lower bound for a comparison algorithm, we often use a decision tree, for example for sorting by comparisons. So let's consider finding the minimum in an array $a[1..n]$ by comparisons. The lower bound for that problem is commonly known: $n-1$ comparisons. However, how can one use a decision tree to prove this lower bound? It seems that it might be (analogously to the sorting problem): $n$ is the number of possible results (similar to $n!$ in sorting), so the number of comparisons (the minimal height of the tree) is $\log_2(n) < n - 1$. Believe me, I have tried to understand it, but I can't.

Answer: There is more than one technique to prove a lower bound on the depth of decision trees. One technique you have seen is bounding the depth of a decision tree by the logarithm of the number of its leaves. As you comment, this technique doesn't give the correct lower bound for the case of computing the minimum. However, we can lower bound the depth of a decision tree computing the minimum directly.

Take any reachable leaf $\ell$ in a decision tree for minimum, and consider the entire path $p$ leading to it. Construct a graph whose vertices are the input elements $x_1,\ldots,x_n$, two elements $(x_i,x_j)$ being connected if they are compared in $p$. I claim that this graph must be connected, and I show this below. A connected graph on $n$ vertices must have at least $n-1$ edges, and this shows that the leaf must have depth at least $n-1$. In particular, the depth of the decision tree is at least $n-1$.

Now to the proof of the claim. Suppose that the graph contains more than one connected component, say it contains the components $C_1,\ldots,C_m$. Since the leaf $\ell$ is reachable, there is a linear ordering compatible with the results of the comparisons in each connected component $C_i$. We can put together these linear orderings into a linear ordering of all elements in $m!$ different ways.
In particular, we can have $C_1 > C_2 > \cdots > C_m$, and we can have $C_1 < C_2 < \cdots < C_m$. This means that it is consistent with the comparisons in $p$ that the minimum is in $C_m$, and it is also consistent that the minimum is in $C_1$. Hence whatever the tree outputs at $\ell$ will be wrong in at least one linear ordering consistent with the comparisons in $p$, showing that the tree doesn't compute the minimum.
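The comparison-graph idea can be checked mechanically: run any comparison-based minimum algorithm, record which index pairs it compares, and verify that the resulting graph is connected (and hence has at least $n-1$ edges). A sketch instrumenting the obvious linear scan — any correct algorithm would do, and the helper names are mine:

```python
def find_min_instrumented(a):
    # Linear-scan minimum that records every comparison it makes
    comparisons = []
    best = 0
    for i in range(1, len(a)):
        comparisons.append((best, i))
        if a[i] < a[best]:
            best = i
    return best, comparisons

def is_connected(n, edges):
    # Union-find; a connected graph on n vertices needs >= n-1 edges
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

a = [5, 3, 8, 1, 9, 2]
best, comps = find_min_instrumented(a)
assert a[best] == min(a)
assert len(comps) == len(a) - 1        # exactly n-1 comparisons here
assert is_connected(len(a), comps)     # the comparison graph is connected
```

The proof says this connectivity is not an accident of the linear scan: any tree that makes fewer than $n-1$ comparisons on some input leaves the graph disconnected and can be fooled.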
{ "domain": "cs.stackexchange", "id": 5456, "tags": "lower-bounds" }
Duration of rotating and sliding down an incline
Question: A cylinder and a cube, both with the same mass, are placed on top of an incline with friction and then let go. The cylinder rolls down without sliding, while the cube slides. They arrive at the bottom at the same time; is this correct? And could it be derived from the formula $$\frac{1}{2}(g\sin\theta-g\mu \cos\theta )t^2=x$$ with $x$ being the incline length, $\theta$ the slope angle, and $\mu$ the coefficient of friction? Answer: The time taken by the cylinder and the cube will not be the same, as in this particular case they have different accelerations. Since the cylinder is rolling, the acceleration of its center of mass will not be the one you gave in your equation. The fact that it is rolling on the inclined plane leads to a different acceleration, and I am letting you figure out what it is. (Just as a hint, I am adding a picture.) On the other hand, the time you found through your equation is the time taken for the cube to come down the incline, since it is slipping.
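For the cube, the question's formula can be solved for $t$ directly. A quick numeric sketch with illustrative values of my own choosing (the cylinder's rolling acceleration is deliberately left as the exercise above):

```python
from math import sin, cos, radians, sqrt, isclose

def cube_slide_time(x, theta_deg, mu, g=9.81):
    # From (1/2) g (sin θ − μ cos θ) t² = x: the cube's time down the incline
    theta = radians(theta_deg)
    a = g * (sin(theta) - mu * cos(theta))
    if a <= 0:
        raise ValueError("friction too strong: the cube never slides")
    return sqrt(2 * x / a)

t = cube_slide_time(x=2.0, theta_deg=30.0, mu=0.2, g=9.81)

# Consistency check: plugging t back into the formula reproduces x
a = 9.81 * (sin(radians(30.0)) - 0.2 * cos(radians(30.0)))
assert isclose(0.5 * a * t * t, 2.0)
```

Once you have derived the rolling cylinder's acceleration, an analogous one-liner gives its time, and comparing the two settles the "same time" claim numerically.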
{ "domain": "physics.stackexchange", "id": 57589, "tags": "homework-and-exercises, newtonian-mechanics, rotational-dynamics" }
Are Huffman codes self-synchronizing?
Question: A code is (statistically) self-synchronizing if, given that the transmitted string is long enough, the receiver is guaranteed to eventually synchronize with the sender, even if bit flips or slips have occurred. Do Huffman codes have this property in general? If not, is there a criterion for testing if a Huffman code is self-synchronizing or, equivalently, is there a modified construction of a Huffman code which guarantees self-synchronization? Answer: Assume 256 symbols, each with probability 1/256. The Huffman code words will be all 256 possible eight-bit sequences, and there is no synchronisation at all. To find if a Huffman code is self-synchronising:

Let S = {}
For each code c:
    For 1 ≤ m < length(c):
        Let s be the last m bits of c
        As long as s has a code c' as prefix:
            Remove prefix c' from s
        If s is not empty and not in S:
            Add s to S

Loop:
    If S is empty then the code is self-synchronising.
    Let T = {}
    For each s' in S, and code c:
        Let s be s' concatenated with c
        As long as s has a code c' as prefix:
            Remove prefix c' from s
        If s is not empty and not in T:
            Add s to T
    If T = S then the code is not self-synchronising
    Let S = T and loop.
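The procedure translates almost line-for-line into Python. In this sketch I keep a set of previously seen remainder-sets so that repetition is detected even when the states cycle rather than hit an immediate fixed point; note that for a prefix-free (Huffman) code, at most one codeword can be stripped at each step:

```python
def strip_codes(s, codes):
    # Repeatedly remove any codeword that is a prefix of s.
    changed = True
    while changed:
        changed = False
        for c in codes:
            if s.startswith(c):
                s = s[len(c):]
                changed = True
                break
    return s

def is_self_synchronizing(codes):
    # Initial remainders: proper suffixes of codewords, reduced modulo the code
    S = set()
    for c in codes:
        for m in range(1, len(c)):
            s = strip_codes(c[-m:], codes)
            if s:
                S.add(s)
    seen = set()
    while S:
        frozen = frozenset(S)
        if frozen in seen:          # remainder set repeats: never resolves
            return False
        seen.add(frozen)
        T = set()
        for s in S:
            for c in codes:
                t = strip_codes(s + c, codes)
                if t:
                    T.add(t)
        S = T
    return True                     # every misalignment eventually resolves

assert is_self_synchronizing(['0', '1'])                  # trivially: any cut is a boundary
assert not is_self_synchronizing(['00', '01', '10', '11'])  # the 8-bit example, scaled down
```

The second assertion is the answer's counterexample in miniature: with all codewords of equal length, a one-bit slip can never be detected, let alone repaired.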
{ "domain": "cs.stackexchange", "id": 13728, "tags": "coding-theory, huffman-coding" }
Why is $\frac{d^2x^{\mu}}{d\lambda^2}=0$ not a tensorial equation?
Question: In flat space, the motion of freely falling particles given by the parametrized path $x^\mu(\lambda)$ is given by the geodesic equation $$\frac{d^2x^{\mu}}{d\lambda^2}=0.$$ Why is this not a tensorial equation? From my understanding, $\frac{dx^\mu}{d\lambda}$ is a $(1,0)$ tensor $A^\mu$, and $\frac{dA^\mu}{d\lambda}$ is taking the difference of a tensor, i.e. $dA^\mu$, and dividing by a scalar $d\lambda.$ This means that $\frac{dA^\mu}{d\lambda} \equiv\frac{d^2x^\mu}{d\lambda^2}$ is a tensor. Answer: $A^\mu(\lambda) \equiv \frac{dx^\mu}{d\lambda}$ are the components of a $(1,0)$-tensor $\mathbf A = A^\mu \left.\frac{\partial}{\partial x^\mu}\right|_{x(\lambda)}$ attached to the point $x(\lambda)$. The derivative $\frac{d}{d\lambda} \mathbf A$ involves computing the difference between two tensors - $\mathbf A(\lambda + d\lambda)$ and $\mathbf A(\lambda)$ - which are attached to different points, namely $x(\lambda+d\lambda)$ and $x(\lambda)$. This ability is provided by the connection $\Gamma$. Explicitly, we have that $$\mathbf A(\lambda+d\lambda) \simeq \mathbf A(\lambda) + \left(\frac{dA^\mu}{d\lambda} + \Gamma^\mu_{\nu \alpha} \frac{dx^\nu}{d\lambda} \frac{dx^\alpha}{d\lambda}\right) d\lambda\left.\frac{\partial}{\partial x^\mu}\right|_{x(\lambda)}$$ and so $$\frac{d}{d\lambda} \mathbf A = \left(\frac{d^2 x^\mu}{d\lambda^2} + \Gamma^\mu_{\nu\alpha} \frac{dx^\nu}{d\lambda} \frac{dx^\alpha}{d\lambda}\right) \left.\frac{\partial}{\partial x^\mu}\right|_{x(\lambda)}$$ In Cartesian coordinates, the connection components $\Gamma^\mu_{\nu\alpha}$ all vanish, and so $\frac{d^2 x^\mu}{d\lambda^2}$ transforms like a tensor provided that you remain in Cartesian coordinates (i.e. you only perform Lorentz transformations), but if you want an object which transforms properly under general coordinate transformations then you'll need to include the $\Gamma$ term.
{ "domain": "physics.stackexchange", "id": 82674, "tags": "general-relativity, differential-geometry, tensor-calculus, differentiation, covariance" }
What is sample and feature
Question: I'm reading the Scikit-learn documentation and I can't understand what sample and feature mean in (n_samples, n_features). Can anybody describe those by example? Answer: Consider the data x1 = [1,2,3,4], x2 = [1,2,2,3], x3 = [2,3,2,1]. The data above has 4 features. We can give those features labels with a header; we'll just call them feature 1 through feature 4. For the first entry, feature 1 has a value of 1, feature 2 has a value of 2, and so on. A sample is a single entry (one row) taken from your dataset; x1 = [1,2,3,4] is a single sample of the dataset. Whatever you are trying to do with Scikit-learn wants to know how many samples and features you have; my example has 3 samples and 4 features (or columns).
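In scikit-learn terms, such data is passed as a 2-D array of shape (n_samples, n_features): one row per sample, one column per feature. A plain-Python illustration of the example above:

```python
# Three samples, each described by the same four features
X = [
    [1, 2, 3, 4],   # sample 0
    [1, 2, 2, 3],   # sample 1
    [2, 3, 2, 1],   # sample 2
]

n_samples = len(X)          # number of rows
n_features = len(X[0])      # number of columns
assert (n_samples, n_features) == (3, 4)

# "feature 1" of the first sample, using 0-based indexing
assert X[0][0] == 1
```

Any estimator's `fit(X, y)` expects exactly this orientation, which is why scikit-learn error messages so often quote the (n_samples, n_features) shape back at you.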
{ "domain": "datascience.stackexchange", "id": 5688, "tags": "machine-learning, scikit-learn" }
If the time-ordering symbol does not respect commutation relations, why is the expansion of the Dyson series valid?
Question: A standard result in QFT is the expression of the interacting theory correlation functions in terms of field operators in the interaction picture and the free theory vacuum: $$\langle\Omega|\mathcal T\{\phi(x)\phi(y)\}|\Omega\rangle=\lim_{T\rightarrow \infty(1-i\epsilon)}\frac{\langle 0|\mathcal T\left\{\phi_I(x)\phi_I(y)\exp[-i\int^T_{-T} dt H_I(t)]\right\}|0\rangle}{\langle0|\mathcal T\left\{\exp[-i\int^T_{-T} dt H_I(t)]\right\}|0\rangle} \tag{4.31}.$$ (I have numbered the equation as this is in reference to Peskin & Schroeder) The utility of this expression is obvious, but what is strange to me is that in order to obtain the numerator on the RHS above we take the previous step in the derivation: $$\langle\Omega|\mathcal \phi(x)\phi(y)|\Omega\rangle=\lim_{T\rightarrow \infty(1-i\epsilon)}\frac{\langle 0|U(T,x^0)\phi_I(x)U(x^0,y^0)\phi_I(y)U(y^0,-T)|0\rangle}{\langle0|\exp[-i\int^T_{-T} dt H_I(t)]|0\rangle} \tag{i}$$ and apply the time-ordering operator to both sides, inside of which the time evolution operators can be moved past each other in the numerator, which leaves us with the same exponential as in the denominator. My problem is that we then take 4.31 and treat it as if it is equal to (i), so that we can use a Taylor expansion and Wick's theorem etc. to calculate the interacting correlation functions. However, we appear to have made explicit use of the fact that everything commutes under the $\mathcal T\{\}$ "operator" to get there. Which in reality they do not. I have, while writing this thought of an idea. Since the $U$ operators in (i) are integrals over $H_I(t)$ which (at least in $\phi^4$ theory) is simply $\int d^3z\text{ }\phi_I^4(z)$, we could commute this operator through the field operators $\phi(x)$ and $\phi(y)$, however this seems to require they be spacelike separated, which they will in general not be. So I am again at a loss. 
Answer: The main point is that the time ordering procedure ${\cal T}[~]$ does not take operators to operators, but symbols/functions to operators. This is similar to what happens with the normal ordering procedure $:~:$, cf. e.g. this, this & this Phys.SE posts. Example: $$ \begin{align}{\cal T}[e^{A(t_1)+B(t_2)}]~=~&{\cal T}[e^{A(t_1)}e^{B(t_2)}]\cr ~=~&\theta(t_1\!-\!t_2)e^{\hat{A}(t_1)}e^{\hat{B}(t_2)}\cr ~+~&\theta(t_2\!-\!t_1)e^{\hat{B}(t_2)}e^{\hat{A}(t_1)}.\end{align} $$
{ "domain": "physics.stackexchange", "id": 90149, "tags": "quantum-field-theory, hamiltonian, perturbation-theory, interactions, correlation-functions" }
How can the time-dependent Schrödinger equation describe nonlinear phenomena when it is classified as a linear equation?
Question: Many nonlinear phenomena are well described by QM (e.g. multiphoton ionization, high harmonic generation, etc.). The time-dependent Schrödinger equation is a linear equation; how could a linear equation describe such nonlinear phenomena?

Answer: In classical physics, we don't usually bother distinguishing between two things that must be distinguished in quantum physics. In classical physics, all observables commute with each other, so we can (and do) always take the state to be an eigenstate of all of the observables. For this reason, we don't really need to bother distinguishing between the state and the observables. But they are logically distinct: the state is what tells us the values of the observables. Observables represent the kinds of things we can measure, and the state tells us what the results of those measurements will be.

In quantum physics, this distinction is essential, because most observables do not commute with each other, so they cannot all have predictable measurement outcomes. The state tells us what we will get, statistically, when we measure any given observable. Quantum physics is linear in the state and (typically) nonlinear in the equations that govern relationships between observables.

Here's an example. In relativistic quantum electrodynamics (which implicitly accounts for the phenomena listed in the question), the Schrödinger equation is $$ \newcommand{\ra}{\rangle} \newcommand{\la}{\langle} \newcommand{\opsi}{{\overline\psi}} \newcommand{\pl}{{\partial}} \newcommand{\bfE}{\mathbf{E}} \newcommand{\bfB}{\mathbf{B}} i\frac{d}{dt}|\Psi\ra = H|\Psi\ra \tag{1} $$ and the Hamiltonian is $$ H \sim \int d^3x\ H(x) \tag{2} $$ with \begin{align} H(x) &= \opsi(x)\gamma^ki\nabla_k\psi(x) + m\opsi(x)\psi(x) \\ & +\frac{\bfE^2(x)+\bfB^2(x)}{2} +e\opsi(x)\gamma^\mu A_\mu\psi(x).
\tag{3} \end{align} (schematically, without carefully checking the coefficients and without messing with gauge-fixing), where $\psi$ is the spinor field (there should be one for each species of charged particle, but I only included one here) and where $\bfE,\bfB,A$ are different representations of (components of) the electromagnetic field. The Schrödinger equation (1) is linear in the state-vector $|\Psi\ra$, but the Hamiltonian is a moderately complicated combination of the field operators.

To see how this relates to non-linear field equations in classical electrodynamics, we need to work in the Heisenberg picture instead of the Schrödinger picture. In the Heisenberg picture, the state $|\Psi\ra$ is independent of time, and all time-dependence is carried by the field operators (which are used to construct observables associated with regions of spacetime). The Heisenberg equations of motion describe the time-dependence of the field operators. In this case, these equations are (schematically again) $$ \gamma^\mu(i\pl_\mu + eA_\mu)\psi=0 \hskip2cm \pl_\mu F^{\mu\nu}=e\opsi\gamma^\nu\psi. \tag{4} $$ The first equation is the (operator version of the) Dirac equation, and the second equation is the (operator version of) Maxwell's equations. Both are nonlinear in the field operators: the Dirac equation is nonlinear because of the $A\psi$ term, and Maxwell's equations are nonlinear because of the $\opsi\psi$ term.

With allowance for my carelessness with the coefficients, equations (1)-(3) are equivalent to equations (4). The former are linear in the state-vector, and the latter are nonlinear in the field operators.
{ "domain": "physics.stackexchange", "id": 66595, "tags": "quantum-mechanics" }
UDOO sicktoolbox
Question: Hello, I am running Ubuntu 13.04 on UDOO and ROS Hydro. I have been trying to install sicktoolbox_wrapper on UDOO since yesterday by following the "using sicklms" guide but unfortunately I get stuck at the first step: when I try to do "rosdep update" I get an error which says "AssertionError: unable to handle 'index' format version '2' please update rosdistro". But when I try to update it tells me "python-rosdistro is already the newest version". Any help would be greatly appreciated. Thank you. Originally posted by asusrog on ROS Answers with karma: 63 on 2014-08-21 Post score: 0 Original comments Comment by ahendrix on 2014-08-22: Which apt repository are you using? Answer: According to the ARM Hydro status page, the sick packages should be installable through apt (they're currently marked as outdated because I'm working on updated builds). Try: sudo apt-get install ros-hydro-sicktoolbox-wrapper UPDATE The UDOO installation instructions were using a very out-of-date ARM repository (I've updated them now); please update your /etc/apt/sources.list.d/ros-latest.list to: deb http://packages.namniart.com/repos/ros raring main Originally posted by ahendrix with karma: 47576 on 2014-08-22 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by asusrog on 2014-08-23: Yes, I was using the old repository.. I have successfully installed the package now.. but when I try to run it.. "rosrun sicktoolbox_wrapper sicklms" nothing happens, the terminal freezes.. Comment by ahendrix on 2014-08-23: For some nodes, no output and no prompt is normal. I don't have much experience with the sick driver, so I'm not sure what it's supposed to do. Do you have an x86 machine that you can compare it with? Comment by asusrog on 2014-08-24: Yes, actually I use it sometimes on my laptop as well and it works right away.. Even when the sensor isn't connected.. I get this error- Unable to open serial port [ERROR] [1408884066.958296001]: Initialize failed!
are you using the correct device path?" .. Comment by asusrog on 2014-08-24: I guess it is because I'm unable to see my sensor as a /dev/ttyACM0 or /dev/ttyUSB0 device on UDOO.. I've tried compiling a kernel with the cdc_acm module enabled but with no luck.. It still doesn't do anything.. :( Comment by ahendrix on 2014-08-24: It's quite possible that the sick driver doesn't run well on ARM; you're probably one of the first people to try it. You should open a ticket and work with the package maintainers to troubleshoot your problem. Comment by asusrog on 2014-08-24: I just connected my USB memory stick; I can access the contents. It is being mounted properly but UDOO is not creating a device node for it as well. Could it be a kernel bug or something? Or am I missing something here? Comment by asusrog on 2014-08-24: I'm sorry, I know this is not the place for such questions but I figured you could help me with this. I've tried most of the relevant forums but no replies whatsoever. The kernel version is 3.0.35 Comment by ahendrix on 2014-08-25: The issue on sicktoolbox_wrapper is https://github.com/ros-drivers/sicktoolbox_wrapper/issues/4 ; let's keep the discussion there now that we know this is a bug.
{ "domain": "robotics.stackexchange", "id": 19148, "tags": "ros, laser, sick, sicktoolbox-wrapper" }
How are weights represented in a convolution neural network?
Question: I have been trying to develop a convolution neural network following some guides online. However, most guides I have encountered gloss over an important detail, which is how to programmatically represent the weights in a CNN. As far as I understand, in a "regular" neural network, the weight of a connection is a numerical value, which is adjusted in order to reduce the error; then back-propagation is used to further update the weights, reducing thus the error, etc. However, in a CNN, the input is an array of numbers (the image), and a subset of those (the filter) to calculate the mean error, by multiplying the filter pixels by the original pixels. So, is there a weight neuron for each filter (kernel or feature map) of the image? Or is a single weight neuron represented by the sum of all the mean error's calculated from convolving the filter over the receptive field, such that you have one value, in the end, that is the total error for the entire image? Answer: In convolutional layers the weights are represented as the multiplicative factor of the filters. For example, if we have the input 2D matrix in green with the convolution filter Each matrix element in the convolution filter is the weights that are being trained. These weights will impact the extracted convolved features as Based on the resulting features, we then get the predicted outputs and we can use backpropagation to train the weights in the convolution filter as you can see here.
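To make "the weights are the filter entries" concrete, here is a minimal NumPy sketch (the 4×4 input and 2×2 kernel are made-up values chosen just for the demo); the only trainable parameters of the layer are the four kernel entries, shared across every position of the image:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Plain 'valid' 2D cross-correlation: slide the kernel over the image.
    Every entry of `kernel` is one trainable weight of the layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # one output value = weighted sum of a patch, weights = kernel
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "image"
kernel = np.array([[1.0, 0.0],
                   [0.0, -1.0]])                   # 4 shared weights

features = conv2d_valid(image, kernel)
print(features.shape)   # (3, 3): a full feature map produced by only 4 weights
```

During training, backpropagation adjusts those four kernel values, not a separate weight per pixel — which is exactly the weight sharing the answer describes.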
{ "domain": "datascience.stackexchange", "id": 11686, "tags": "neural-network, convolutional-neural-network" }
Generators in Javascript; Project Euler #2
Question: I'm trying to learn javascript using project euler and I decided to try to force myself to learn about generators in javascript using problem number 2 which asks us to: Find the sum of the even-valued terms in the fibonacci (f(n)) sequence such that f(n) < 4M , beginning with n=1 is 1, n=2 is 2, n=3 is 3 and so on My code below presently works but I'm hoping to get some input as I began learning javascript this week. I'm coming from a python background and any input that keeps that in mind would be heavily appreciated! function* fib_gen() { var current = a = b = 1; while (true) { current = b; yield current; b = a + b; a = current; } } function solution() { sequence = fib_gen(); even_fibs_total = 0; cur = sequence.next().value; while (cur < 4000000) { if (cur % 2 == 0) even_fibs_total += cur; cur = sequence.next().value; } return even_fibs_total } var time_pre = performance.now() document.write(solution()) document.write('<br>') var time_post = performance.now() document.write('completed in ' + Math.round((time_post - time_pre)*100)/100 + ' seconds') Answer: 1) You have a few undeclared variables which means they become global. You want to use var (or let or const) always. Compare this: var current = a = b = 1; // a and b are globals To: var current, a, b; // all variables are local current = a = b = 1; Same here: sequence = fib_gen(); // not declared even_fibs_total = 0; // not declared cur = sequence.next().value; // not declared Compare to: var sequence = fib_gen(); var even_fibs_total = 0; var cur = sequence.next().value; 2) Conventions: Name your variables with camelCase instead snake_case. Use === instead of == unless you know you really really need == (tip: you probably don't). Always write if statements with braces, even if one line, for better readability and refactoring. 3) Generators are iterators, and iterators can be iterated with the for..of loop. 
If your environment supports generators it probably supports for..of: function solution() { var evenFibsTotal = 0; for (var cur of fibGen()) { if (cur >= 4000000) { break; } if (cur % 2 === 0) { evenFibsTotal += cur; } } return evenFibsTotal; } 4) If you have ES6 support (or compilation via Babel) you can use destructuring assignment to simplify the fibonnaci function: function* fibGen() { var a = 1; var b = 1; while (true) { yield b; [a, b] = [b, a + b]; } }
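As a quick language-independent cross-check of the same logic (not part of the JavaScript review itself), the generator-plus-loop structure from the answer can be sketched in Python; the 4,000,000 limit comes from the problem statement:

```python
def fib_gen():
    """Yield the Fibonacci sequence 1, 2, 3, 5, 8, ... as in the problem."""
    a, b = 1, 1
    while True:
        yield b
        a, b = b, a + b

def even_fib_sum(limit):
    """Sum the even Fibonacci terms strictly below `limit`."""
    total = 0
    for value in fib_gen():
        if value >= limit:
            break
        if value % 2 == 0:
            total += value
    return total

print(even_fib_sum(4_000_000))  # 4613732, the known Project Euler #2 answer
```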
{ "domain": "codereview.stackexchange", "id": 19407, "tags": "javascript, programming-challenge, generator" }
How to identify a landslide area early?
Question: We know that this natural disaster can cause great damage. I have heard that we can identify such areas by some geographical signs, such as curved trees. Do we have any stronger detectable signs which we can observe easily? Answer: Curved trees are a sure sign of movement of the surface. Special geomorphic shapes are also present in a slowly developing landslide. At the top of the landslide cracks can start to show; at the foot you can find "bumps" in the soil. Water will have difficulty draining at the foot of the landslide. Here are a few examples of literature that should be something to start with: http://ny.water.usgs.gov/pubs/jrn/ny0155/jrn02-r20500g.pdf http://www.ags-hk.org/notes/09/Geomorphology_and_Landslide_Hazard_Models_Steve_Parry.pdf http://www.adpc.net/casita/pdf/01-Geomorphic-mapping.pdf
{ "domain": "earthscience.stackexchange", "id": 843, "tags": "soil, land-surface" }
Why do we neglect $\Delta t^2(\frac{dr}{dt}\frac{d\hat{\vec{r}}}{dt})$ in the Taylor expansion?
Question: I just started at Ankara University Physics Department two weeks ago. I missed 2 hours of my PHY105 course last Wednesday. The subject that I missed was Derivatives of Vectors. I'm trying to fill the gap. I get the main idea about position vectors, the change of position over time of a particle, finding the velocity of a particle with derivation. But at the end of the concept, the book uses the same thing for the other expression of the position vector ($\vec r=r\hat{\vec{r}}$). Let me show you. ** In my book: We can write $\vec r(t)$ as $r(t)$ magnitude and $\hat{\vec{r}}(t)$ unit vector form. $\vec r(t)=r(t)\hat{\vec{r}}(t)$ $\frac{d\vec r}{dt}=\frac{d}{dt}[r(t)\hat{\vec{r}}(t)]=\lim_{\Delta t\to 0}\frac{r(t+\Delta t)\hat{\vec{r}}(t+\Delta t)-r(t)\hat{\vec{r}}(t)}{\Delta t}$ If we open the series with Taylor Expansion and take the first two terms, the numerator of the fraction is, $[r(t)+\frac{dr}{dt}\Delta t][\hat{\vec{r}}(t)+\frac{d\hat{\vec{r}}}{dt}\Delta t]-r(t)\hat{\vec{r}}(t)$ $=\Delta t(\frac{dr}{dt}\hat{\vec{r}}+r\frac{d\hat{\vec{r}}}{dt})+{\Delta t}^2(\frac{dr}{dt}\frac{d\hat{\vec{r}}}{dt})$ Here, if we neglect the second term when $\Delta t\to 0$, $\vec{V}=\frac{d\vec{r}}{dt}=\frac{dr}{dt}\hat{\vec{r}}+r\frac{d\hat{\vec{r}}}{dt}$ remains. ** Questions 1) Why do we use Taylor Expansion, what does it do? 2) When $\Delta t\to 0$ why do we only neglect the second term and why not the first term? Answer: 1) Why do we use Taylor Expansion, what does it do? That is not a Taylor's series expansion. That is just simple multiplication into a quadratic, i.e. $(x+a)(x+b)=x^2 + ax + bx + ab$, but with the "$ab$" part subtracted out. Try applying that to your $\left[ r(t) + \Delta t \frac{dr}{dt}\right]\left[ \hat {\vec r} (t) + \Delta t \frac{d\hat {\vec r}}{dt}\right] - r(t) \hat {\vec r} (t)$. 2) When $\Delta t \to 0$ why do we only neglect the second term and why not the first term?
How fast does ${\Delta t}^2$ shrink as $\Delta t \to 0$ compared to $\Delta t$?
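A quick numeric illustration of that hint (the derivative values are arbitrary stand-ins chosen just for the demo): the ratio of the second-order term to the first-order term is itself proportional to $\Delta t$, so it vanishes in the limit while the first-order term is what survives after dividing by $\Delta t$:

```python
# Compare a first-order term c1*dt against a second-order term c2*dt**2
# for shrinking dt; their ratio goes to zero like dt itself.
c1, c2 = 3.0, 7.0   # arbitrary stand-ins for the two bracketed coefficients

for dt in (1e-1, 1e-3, 1e-6):
    first = c1 * dt
    second = c2 * dt**2
    print(dt, second / first)   # ratio = (c2/c1)*dt -> 0 as dt -> 0
```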
{ "domain": "physics.stackexchange", "id": 61544, "tags": "kinematics, vectors, differentiation, approximations" }
Why is a fixed $j$ eigenspace irreducible?
Question: In his development of angular momentum, Ballentine writes: The $(2j + 1)$-dimensional space spanned by the set of vectors $\{|j, m\rangle\}$, for fixed $j$ and all $m$ in the range $(−j \leq m \leq j)$, is an invariant irreducible subspace under rotations. To say that the subspace is invariant means that a vector within it remains within it after rotation. This is so because no other values of $j$ are introduced into the linear combination of (7.71). To say that the subspace is irreducible means that it contains no smaller invariant subspaces. Proof of irreducibility is left as an exercise for the reader. I am confused about how to supply said proof. The invariance follows because any rotation operator commutes with $J^2$, and so rotations do not map the $|j,m \rangle$ out of the fixed $j$ eigenspace. I am struggling to prove irreducibility. Obviously, the structure of the proof should be something like "Take any subspace spanned by some strict subset of the $|j,m \rangle$. Then observe that there is a rotation operator which maps an element of that subspace outside of the subspace". Can anyone supply a hint (or just the quick proof) as to how I should pick said rotation operator? Answer: Take your rotation $R(\Omega)$ and write it in factorized form $$ R(\Omega)=e^{\xi J_+}e^{\eta J_z}e^{\zeta J_-} $$ It is always possible to do this and in fact the parameters $\xi,\eta$ and $\zeta$ can be found using any faithful irrep. (People usually use the $2\times 2$ 'cuz it's the simplest for this purpose.) Here, $J_\pm$ and $J_z$ would be the $(2j+1)\times (2j+1)$ matrix representations of the appropriate operators. Then, expand $e^{\zeta J_-}=1+\zeta J_- +\frac12 \zeta^2 J_-^2+\ldots$ to show that, starting from $\vert jm\rangle$, you will reach every $\vert jm'\rangle$ with $m'\le m$, and in particular $\vert j,-j\rangle$. $J_z$ does not change $m'$ so $e^{\eta J_z}$ will not generate any new states. 
Finally, expand $e^{\xi J_+}$ acting on $$e^{\eta J_z}e^{\zeta J_-}\vert jm\rangle\, . \tag{1} $$ Then certainly you will eventually generate all the states $\vert jm''\rangle$ with $m''>m$ if only because, after enough applications, $e^{\xi J_+}\vert j,-j\rangle$ will produce something proportional to $\vert jm''\rangle$ with $m''>m$, and you know Eq.(1) must contain $\vert j,-j\rangle$.
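The key step of that argument can be checked numerically for the smallest nontrivial case, $j=1$ (a small illustrative sketch, not part of the original answer): since $J_-$ is nilpotent in a finite irrep, $e^{\zeta J_-}$ is an exact finite sum, and applied to $|1,1\rangle$ it produces nonzero amplitude on every $|1,m\rangle$ — so no proper subspace containing $|1,1\rangle$ can be invariant:

```python
import numpy as np

j = 1
ms = [1, 0, -1]                      # basis order |1,1>, |1,0>, |1,-1>
dim = len(ms)

# Lowering operator: J- |j,m> = sqrt(j(j+1) - m(m-1)) |j,m-1>
Jm = np.zeros((dim, dim))
for col, m in enumerate(ms):
    if m > -j:
        Jm[col + 1, col] = np.sqrt(j*(j+1) - m*(m-1))

# Jm is strictly lower triangular, so Jm^3 = 0 and the exponential truncates
z = 0.7                                       # arbitrary nonzero parameter
expJm = np.eye(dim) + z*Jm + (z**2/2) * (Jm @ Jm)

v = expJm @ np.array([1.0, 0.0, 0.0])         # start from |1,1>
print(v)  # nonzero on every |1,m>: the rotated vector escapes any smaller subspace
```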
{ "domain": "physics.stackexchange", "id": 96641, "tags": "quantum-mechanics, hilbert-space, angular-momentum, representation-theory" }
Linked list implementation in cpp
Question: I need some review for this CPP Linked list implementation. This is my first time implementing such a data structure in CPP. Thanks in advance. /* ** ** author: Omar_Hafez ** created: 12/05/2022 05:15:38 PM ** */ #include <bits/stdc++.h> using namespace std; template<class T> struct Node { Node<T> *next; T data; }; template<class T> class Linked_List { private: Node<T> *head = NULL; int count = 0; public: bool is_empty() { return (head == NULL); } void push_front(T value) { Node<T> *new_node = new Node<T>; new_node -> data = value; new_node -> next = head; head = new_node; count++; } void push_back(T value) { Node<T> *new_node = new Node<T>; new_node -> data = value; new_node -> next = NULL; if(is_empty()) head = new_node; else { Node<T> *tmp = head; while(tmp -> next != NULL) { tmp = tmp -> next; } tmp -> next = new_node; } count++; } int push_after(T value, T after) { if(is_empty()) return 0; Node<T> *tmp = head; while(tmp != NULL && (tmp -> data) != after) { tmp = tmp -> next; } if(tmp == NULL) return -1; Node<T> *new_node = new Node<T>; new_node -> data = value; new_node -> next = tmp -> next; tmp -> next = new_node; count++; return 1; } int delete_front() { if(is_empty()) return 0; if(count == 1) { count--; head = NULL; return 1; } head = head -> next; count--; return 1; } int delete_back() { if(is_empty()) return 0; if(count == 1) { count--; head = NULL; return 1; } Node<T> *tmp = head; while(tmp -> next -> next != NULL) tmp = tmp -> next; tmp -> next = NULL; count--; return 1; } int erase(int value) { if(is_empty()) return 0; int cnt = 0; while(!is_empty() && head -> data == value) { cnt++; delete_front(); } Node<T> *tmp = head; while(tmp -> next != NULL) { if(tmp -> next -> data == value) { tmp -> next = tmp -> next -> next; cnt++; count--; } else tmp = tmp -> next; } if(tmp -> data == value) { delete_back(); cnt++; } return cnt; } void print_values() { Node<T> *tmp = head; while(tmp != NULL) { cout << (tmp -> data) << " "; tmp = tmp -> next; } 
cout << endl; } int search(T value) { Node<T> *tmp = head; while(tmp != NULL) { if(tmp -> data == value) return value; tmp = tmp -> next; } return -1; } int size() { return count; } }; Answer: Overview This is a singly linked list. Which is fine. But it is relatively trivial to implement a doubly linked list (next/prev link in each node). Also by using a doubly linked list and a sentinel (look up the sentinel pattern) you can remove the need to check for nullptr (which makes the code easier to read). You don't release the memory you allocate with new. For every call to new there should be an matching call to delete. This means your object should have a destructor. You don't obey the rule of three or five. If your object owns (owns ⇒ creates and destroys) a resource (in this case head) then the compiler-implemented copy constructor and assignment operator will not work (as you expect) and you need to implement your own. Code-Review Always good. /* ** ** author: Omar_Hafez ** created: 12/05/2022 05:15:38 PM ** */ Not sure if you need a time :-) but you may want to add (C) 2022 if this is auto generated. Note: By posting on this site you are licensing under CC (see bottom of page for details). Never do: This: #include <bits/stdc++.h> That include is non-standard and your code is going to break and not compile at some point. Or this: using namespace std; This is going to get you in trouble in the long run. You should read the article Why is "using namespace std;" considered bad practice? second answer is the best in my opinion. Rather than doing this, prefix standard library types objects with std::. It's only 5 characters and the name std was designed to be short for that reason. This is good. But nobody should use this information (it is an internal detail to the class Linked_List) so make it a private member of the class so only it can use it. template<class T> struct Node { Node<T> *next; T data; }; It's not wrong. 
Node<T> *head = NULL; But in C++ (unlike C) the * is usually placed as part of the type. Node<T>* head = nullptr; In C++ the type information is very important so keeping it all together is useful in reading. Also NULL is C code. In C++ we use nullptr - it is type safe (unlike NULL). We can simplify this a bit and make it more readable in one go: void push_front(T value) { Node<T> *new_node = new Node<T>; new_node -> data = value; // Normally we don't add space new_node -> next = head; // around the -> operator. head = new_node; count++; } // How about this? void push_front(T value) { head = new Node<T>{head, value}; ++count; } This is a OK when T is a simple type like int. But what happens when T is LargeObjectWithState or std::vector<std::vector<int>>? void push_front(T value) ^^^^^^^ You are passing the parameter by value. That means there is a copy made before the call is even made (to copy it to the location where parameters are stored). Then inside the function you are copying into a Node object. new_node -> data = value; // Makes another copy of the object. To get around this we normally pass parameters by reference to avoid the first copy. void push_front(T const& value) ^^^^^^ // a reference to the original value. // but we can't change the value. So that gets around one copy. But C++11 added a concept called move semantics that allows us to move objects (rather than copy them). Parameters that are movable are marked with && to indicate that we want to bind to an R-Value reference and that we may steal the content of the object as a result of calling the function. void push_front(T&& value) There is some magic you have to do. 
When you assign these values you need to make sure you do so in a way that tells the destination that you are moving the object to the destination: // Your function would look like this: void push_front(T&& value) { Node<T> *new_node = new Node<T>; new_node->data = std::move(value); new_node->next = head; head = new_node; count++; } // or how about this? void push_front(T&& value) { head = new Node<T>{head, std::move(value)}; ++count; } Note. You should have both versions of the function: void push_front(T&& value); void push_front(T const& value); There is a third variant called emplacing (but we can get to that in a subsequent review). Basically the same comments for push_back() as push_front(). void push_back(T value) { Here you are returning magic numbers: int push_after(T value, T after) { 0: Empty List -1: Value does not exist 1: Value was inserted. Not sure why 0 and -1 are different (in both cases after does not exist in the list). So if you combine these two you could return a boolean value to indicate whether the value was inserted. If you must stick with three then I would create an enum that makes this relationship readable. enum InsertStatus {FailedEmptyList, FailedValueDoesNotExist, OK}; InsertStatus push_after(T value, T after) { This looks very similar to some of the code above. Node<T> *new_node = new Node<T>; new_node -> data = value; new_node -> next = tmp -> next; tmp -> next = new_node; count++; We now have three places where you are calling new Node and then setting up the values and incrementing count. When you have repeated code you may want to put that in a named function, so that if there is a bug you only have to fix it in one place - and it makes reading the code easier. If you removed this if code. if(count == 1) { count--; head = NULL; return 1; } head = head -> next; count--; Would it change the behaviour? If there is only one item, then head->next should be nullptr. So head = head->next; would set head to nullptr.
My main issue with this function is that you are leaking the Node. You allocated that node with new, so you should release the memory with delete. Node* old = head; head = head -> next; delete old; // every call to new should be matched with a call to delete. Don't think this will ever be true. if(tmp -> data == value) { delete_back(); cnt++; } You covered this situation in your main loop above. You don't want to use std::endl. It prints \n then flushes the stream. cout << endl; The stream is flushed automatically. You flushing it manually is only going to make it very inefficient. If a function does not change the state of the object, mark it constant. int size() const { // ^^^^^^ return count; }
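The "sentinel pattern" the reviewer mentions is easy to sketch outside C++ as well. Here is a minimal illustrative Python version (not the reviewed code): with a doubly linked list whose sentinel's prev/next always point at the real ends (or at itself when empty), insertion and removal never need a null check:

```python
class _Node:
    __slots__ = ("value", "prev", "next")
    def __init__(self, value=None):
        self.value = value
        self.prev = self.next = self   # a lone node links to itself

class DList:
    """Doubly linked list with a sentinel: no None checks anywhere."""
    def __init__(self):
        self._sentinel = _Node()
        self._count = 0

    def push_front(self, value):
        self._insert_after(self._sentinel, value)

    def push_back(self, value):
        self._insert_after(self._sentinel.prev, value)

    def _insert_after(self, node, value):
        # Works identically for empty and non-empty lists.
        new = _Node(value)
        new.prev, new.next = node, node.next
        node.next.prev = new
        node.next = new
        self._count += 1

    def pop_front(self):
        node = self._sentinel.next
        if node is self._sentinel:
            raise IndexError("pop from empty list")
        node.prev.next = node.next
        node.next.prev = node.prev
        self._count -= 1
        return node.value

    def __len__(self):
        return self._count

    def __iter__(self):
        node = self._sentinel.next
        while node is not self._sentinel:
            yield node.value
            node = node.next

d = DList()
d.push_back(1); d.push_back(2); d.push_front(0)
print(list(d))          # [0, 1, 2]
```

Note how `_insert_after` and `pop_front` manipulate links unconditionally; the sentinel absorbs all the edge cases that the reviewed C++ code handles with `is_empty()` and `count == 1` branches.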
{ "domain": "codereview.stackexchange", "id": 43346, "tags": "c++, linked-list" }
Is a force on the edge of a disk also a torque if the disk is not constrained?
Question: Suppose you have a disk in a vacuum that is completely unconstrained (not fixed at its center). If you apply a tangential force at a point on the circumference of the disk, does the force act purely as a torque and begin to change the angular momentum of the disk without changing its linear momentum? Or does the disk center also accelerate forward through space? I am split between the scenarios since, on the one hand, a force at a point on the circumference is transferred by the rigid disk to the center, thus accelerating it. But there is also a torque on the body, which changes angular momentum. Can someone explain? Answer: You will have both rotational and translational acceleration since there is a net force acting on the disc. Therefore you will have both angular and linear momentum. If you had two equal but opposite forces acting on opposite sides of the diameter of the disc (a force couple) then you would have pure rotation. Hope this helps
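A numeric sketch of that conclusion for a uniform disk (the mass, radius, and force values are made up for the example): a single tangential force $F$ at the rim gives both a linear acceleration $a=F/m$ of the center of mass and an angular acceleration $\alpha=FR/I$ with $I=\tfrac12 mR^2$:

```python
# Uniform disk, single tangential force F applied at the rim.
m = 2.0      # kg   (example value)
R = 0.5      # m    (example value)
F = 10.0     # N    (example value)

I = 0.5 * m * R**2        # moment of inertia of a uniform disk about its center
a = F / m                 # linear acceleration of the center of mass (F = m a)
alpha = F * R / I         # angular acceleration (torque F*R = I*alpha)

print(a, alpha)           # both nonzero: the disk translates AND spins up
```

The same single force shows up twice: once in Newton's second law for the center of mass, and once as a torque about the center — which is exactly the answer's point.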
{ "domain": "physics.stackexchange", "id": 60746, "tags": "angular-momentum, torque" }
Using ethzasl_icp_mapping in indigo with velodyne HDL-32E
Question: I am trying to use ethzasl_icp_mapping on Ubuntu 14.04 with ROS Indigo. Here are the steps I followed: first, I clone ethzasl_icp_mapping into my catkin_ws; I run catkin_make, but no build file is generated for "ethzasl_icp_mapping_project". Here are a couple of the questions I have: can someone give me the exact steps to run the project? I want to use the point cloud from a Velodyne HDL-32E. Originally posted by lotfishtaine on ROS Answers with karma: 76 on 2016-06-07 Post score: 0 Original comments Comment by Tarantula-7 on 2016-11-17: Could you finally succeed in using with Velodyne HDL-32E Answer: To compile the "ethzasl_icp_mapping" library in "ROS Indigo" under "Ubuntu 14.04" it is necessary to do the following in catkin_ws / src : 1- git clone -b indigo_devel https://github.com/ethz-asl/ethzasl_icp_mapping.git 2- git clone -b release/indigo/libnabo https://github.com/ethz-asl/libnabo-release.git 3- git clone -b release/indigo/libpointmatcher https://github.com/ethz-asl/libpointmatcher-release.git 4- in catkin_ws execute: catkin_make_isolated 5- in catkin_ws / devel_isolated execute: source setup.bash Originally posted by lotfishtaine with karma: 76 on 2016-06-16 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 24851, "tags": "ros, 3d-slam, ethzasl-icp-mapping, ros-indigo, velodyne" }
Type name as a string
Question: I posted this answer recently. OP in the StackOverflow question mentioned that it was for educational use, but I can imagine it being used as a debugging aid. Then I decided to improve it slightly and make it a compile-time function. Code: #include <cassert> #include <iostream> #include <string_view> #include <type_traits> #include <vector> #ifndef __GNUC__ static_assert(false, "GCC specific"); #endif // !__GNUC__ // Finds a character in a nul terminated string. // Returns the last character if sought one isn't found. // Notes: // 1. Using this because std::strchr isn't constexpr. // `PChr` is a separate typename instead of just using Chr* to allow `Chr*` and // `Chr const*` as the pointer. // 2. pstr shall not be nullptr. template <typename PChr, typename Chr> [[gnu::pure, gnu::nonnull, gnu::returns_nonnull, nodiscard]] static constexpr auto* constexpr_strchr(PChr pstr, Chr const value) noexcept { // PChr must be a raw pointer type because of gnu::nonnull and // gnu::returns_nonnull static_assert(std::is_pointer_v<PChr>, "PChr must be a raw pointer type"); auto constexpr nul = Chr{}; while (*pstr != value && *pstr != nul) { ++pstr; } return pstr; } // Returns distance from ptr to the end of the array. // Notes: // 1. ptr shall not be nullptr // 2. ptr shall be inside the array (arr) template <typename T, auto size> [[gnu::const, gnu::artificial, gnu::nonnull, gnu::always_inline, nodiscard]] inline static constexpr auto distance_to_end(const T (&arr)[size], T const* const ptr) noexcept { return arr + size - ptr; } // Returns type T as a string_view. // Ex: std::string -> "std::string" template <typename T> [[gnu::const, nodiscard]] static constexpr auto type_name_finder() noexcept { // __PRETTY_FUNCTION__ means "$FUNCTION_SIGNATURE [with T = $TYPE]". 
// +2 here to skip "= " auto const* const begin = constexpr_strchr(__PRETTY_FUNCTION__, '=') + 2; // -2 meaning up to "]\0" auto const size = static_cast<std::size_t>(distance_to_end(__PRETTY_FUNCTION__, begin) - 2); return std::string_view{begin, size}; } // Inline string_view with the type name. template <typename T> inline constexpr auto type_name = type_name_finder<T>(); // Example Class template <typename T1, typename T2> class my_class {}; int main() { // Example use-case my_class<int&, std::vector<double>> my_arr[20]; std::cout << type_name<decltype(my_arr)>; } Code uses __PRETTY_FUNCTION__ in GCC and Clang to get a variable's type as a string. I would appreciate any comments on correctness and readability. I used a lot of GCC specific attributes, because __PRETTY_FUNCTION__ is GCC specific and for this reason the code can't be portable anyway. Answer: You're trying to reinvent the wheel... This specific challenge was discussed in answers to this SO question: Read HowardHinnant's long answer, then read the final, pretty, constexpr answer here. @Snowhawk quoted the code there - now you have some references and the build-up to the final form of the code.
{ "domain": "codereview.stackexchange", "id": 37405, "tags": "c++" }
PSPACE-completeness of DFA intersection problem
Question: Let some deterministic finite automata be given. There is a problem of determining whether the intersection of these DFA is empty, and I want to show its PSPACE-completeness. It seems to me that I understand, why this problem lies in PSPACE: if $ n_1,\dots, n_k $ are the numbers of states, then one can take the product of these DFA and get that if there is a recognisable string, then there is a recognisable string of length $ \le n_1\dots n_k$. Then we can non-deterministically guess this string and use Savitch theorem. So, the question is, how to prove PSPACE-hardness? Should I reduce any given TM to this problem, or maybe some known problem like QBF? Answer: Your proof that the problem is in PSPACE is not quite correct. The problem is that the product $n_1 \cdots n_k$ is not bounded by a polynomial in the input length $n_1 + \dots + n_k$. The correct way to do it is to directly apply Savitch's theorem to the NPSPACE machine that nondeterministically guesses a path through the product graph. The difference is that, while an accepting path can indeed be as long as $n_1 \dots n_k$, the machine only guesses one symbol at a time and never stores the entire input string in memory, so we stay within the polynomial space limit. It is a well-known fact that this problem is PSPACE-hard, but the only proof that I am aware of is not particularly trivial. I believe the first proof was given in: Dexter Kozen. Lower Bounds for Natural Proof Systems. FOCS 1977: 254-266. The proof involves constructing a mess of DFAs which collectively enforce all the necessary constraints to require that any string in the intersection language is a trace of the execution of a polynomial space Turing machine that leads to an accepting configuration.
{ "domain": "cs.stackexchange", "id": 15321, "tags": "complexity-theory, turing-machines, automata, finite-automata, space-complexity" }
What substances are hypergolic with liquid O2?
Question: If something flammable is surrounded just by oxygen, I should see flame, right? The object would even be surrounded by more molecules than in the case of gaseous oxygen. On YouTube I see that people first light up the objects and then they add oxygen. That means that the things won't oxidize by themselves. Why? Are there materials that would just burn without prior ignition? Answer: Organic material will burn just fine in liquid oxygen. The reason people add liquid oxygen to already burning material instead of soaking the material in LOX and then lighting it is that they are not idiots. If you soak organic material in LOX before lighting it, it becomes a high explosive, called an oxyliquit.
{ "domain": "chemistry.stackexchange", "id": 10625, "tags": "combustion" }
Mixin and template-heavy code for a Silverlight clone
Question: I've made a mock version of Silverlight in D2. It is meant to output HTML, but since I don't know HTML yet, its current output is in. HTML is not the reason I'm here, though. The goal of my project was to create a framework that uses XAML and MVVM to create a site bound to a viewmodel. It doesn't do too much and isn't dynamic as of yet, but right now I'm just at the first milestone. Before I go any further with the project I'd like some feedback on what I have so far. Specifically, my use of template mixins to achieve polymorphism and override base class methods, even if they contain the exact same code (this problem is explained in code-comments). But at the same time this is my first foray into D and I'm sure there are other things I could use help with, so I'm open to any suggestions. The code is on GitHub for anyone interested. I know it's not very useful but I'm having fun with it. In my opinion, the code explains itself with comments, but I can add more if it's unclear.
module darklight;

import std.stdio: writeln;

enum string __module = "darklight";

//A sample view model class
//Can compound classes other than OddVM as well, this is just a demo
class OddVM {
    string text;
    OddVM inner;
    OddVM list[];

    string amethod(string name) {
        return "my name is: "~name;
    }

    this(string te) {
        this.text = te;
    }
}

//showing off the different controls, as well as binding features and some of the xml capacity
enum string _xml = "
<StackPanel name=viewModel.text binding=viewModel.inner>
    <TextBox name=\"itext1\" text=binding.text> </TextBox>
    <TextBox name=\"itext2\" binding=viewModel.inner text=binding.text />
    <TextBox name=\"itext3\" binding=viewModel.inner text=binding.amethod(\"robot\") />
    <TextBox name=\"itext4\" binding=viewModel.inner.inner text=\"too deep, would error if I tried binding\"/>
    <ListPanel name=\"somename\" binding=viewModel.list>
        <TextBox name=\"itextinside\" text=binding.text/>
    </ListPanel>
    <DoubleText name=\"double\" binding=viewModel.inner/>
</StackPanel>";

//viewModel when accessed provides the passed in vm
//binding is the new one if set, or the same as viewModel if not
int main(string[] args) {
    //create a sample viewModel
    OddVM vm = new OddVM("bound");
    vm.inner = new OddVM("bound inner");
    vm.inner.inner = new OddVM("bound inner inner");
    OddVM vm1 = new OddVM("first");
    OddVM vm2 = new OddVM("second");
    OddVM vm3 = new OddVM("third");
    vm.inner.list = [vm1, vm2, vm3];

    //The vm could be any type of object
    //It just needs the proper variables as defined in the xml
    auto res = Control.BuildControl!(_xml)(vm);
    writeln(res.content());

    //change some values in the viewmodel
    vm.text = "new text";
    vm.inner.text = "new inner text";
    //can add things to a list and the templates won't be disturbed
    vm.inner.list ~= vm1;

    //need to rebuild controls to bind new values
    res = Control.BuildControl!(_xml)(vm);
    writeln(res.content());
    return 0;
}

abstract class Control {
    string name;

    mixin template codegen(string _xml) {
        //provides access to some parsed sections of the xml
        static const string extr[] = parsexml!(_xml);
        static const string _tag = extr == [] ? "" : extr[0];
        static const string _attr = extr == [] ? "" : extr[1];
        static const string _inner = extr == [] ? "" : extr[2];
        static const string _rem = extr == [] ? "" : extr[3];
        static const string _elem = extr == [] ? "" : extr[4];
    }

    mixin template setvarsgen() {
        private auto setvars(string _xml, T)(T viewModel) {
            //for access to attr
            mixin codegen!(_xml);
            //add the string of assignments for binding to the viewModel
            mixin(assignstring(_attr));
            return binding;
        }
    }

    //This is in a mixin template because when I want it to be called by a subclass
    //the subclass needs to have its own implementation of it or the method bindings will crash.
    //ex: if this was called from TextBox without being re-mixed-in, it would fail on assigning this.text
    //This might be a failure of polymorphism with templated methods, or just me
    mixin template extractgen(string _newxml = null) {
        static if (_newxml == null) {
            //a base extraction method, sets variables to the bindings outlined in xml
            private auto extract(string _xml, T)(T viewModel) {
                this.setvars!(_xml)(viewModel);
                return this;
            }
        } else {
            // This one is for controls with custom xaml, usually you shouldn't need to override this
            // But you'll probably have to do Control.extractgen!(_newxaml) to bypass any overrides
            private auto extract(string _xml, T)(T viewModel) {
                // The var setting from this gets overwritten, but we need the binding
                auto binding = setvars!(_xml)(viewModel);
                // Distinction here is that _newxml replaces xml
                auto ret = super.extract!(_newxml)(binding);
                // This lets xml bindings (ex: name) from the outside override anything on the inside
                setvars!(_xml)(viewModel);
                return ret;
            }
        }
    }

    mixin setvarsgen!();
    mixin extractgen!();

    //Outputs the html/other things representing the control
    abstract pure string content();

    static auto BuildControl(string _xml, T)(T viewModel) {
        mixin codegen!(_xml);
        mixin(_tag~" control = cast("~_tag~") Object.factory(\""~__module~"."~_tag~"\");");
        return control.extract!(_xml)(viewModel);
    }
}

class TextBox : Control {
    string text;

    //the new class variable text means these two have to be re-added
    mixin setvarsgen!();
    mixin extractgen!();

    pure string content() {
        return "<p "~this.name~">" ~ this.text ~ "<p>\n";
    }
}

class StackPanel : Control {
    Control controls[];

    mixin template extractgen() {
        //overrides the extract method, adds inner controls
        private auto extract(string _xml, T)(T viewModel) {
            mixin codegen!(_xml);
            auto binding = setvars!(_xml)(viewModel);
            this.controls = this.chain!(_inner)(binding);
            return this;
        }
    }

    mixin setvarsgen!();
    mixin extractgen!();

    pure string content() {
        string ret;
        foreach(contr; controls) {
            ret ~= contr.content();
        }
        return "<div "~this.name~">\n"~ret~"<div>\n";
    }

    private Control[] chain(string _xml, T)(T viewModel) {
        mixin codegen!(_xml);
        //this 5 is completely arbitrary, limits you w.r.t. the xml
        static if (_rem.length > 5) {
            return [cast(Control)BuildControl!(_elem)(viewModel)] ~ chain!(_rem)(viewModel);
        } else {
            return [cast(Control)BuildControl!(_xml)(viewModel)];
        }
    }
}

//example specialization of existing control
class ListPanel : StackPanel {
    //this is necessary because templates don't override well...
    //without this, the this.chain! call doesn't call ListPanel.chain! it calls StackPanel.chain!
    //I described this earlier
    mixin extractgen!();
    //setvarsgen! doesn't need to be here since there are no new class variables

    private Control[] chain(string _xml, T)(T viewModel) {
        Control ret[];
        foreach(vm; viewModel) {
            ret ~= BuildControl!(_xml)(vm);
        }
        return ret;
    }
}

//example of user defined xml for a new control
class DoubleText : StackPanel {
    enum _xmlstring = "
    <StackPanel name=\"doubletext type\">
        <TextBox name=\"itext1\" text=binding.text> </TextBox>
        <TextBox name=\"itext2\" text=\"danger zone!\"> </TextBox>
    </StackPanel>";

    //this will call super.extract, while using the new xml
    //"Control." is necessary for this control and sometimes others because their parents overwrite extractgen! without defining a string constructor
    //in other cases you might need to rewrite extract!
    mixin Control.extractgen!(_xmlstring);
}

/*
 * THE CODE BEYOND THIS POINT IS EMBARRASSING AND SHAKY, BUT SERVES ITS PURPOSE
 * It's for slightly picky xml parsing
 * It's not as bad as it once was...
 */

//returns a string of code to be mixin'd, that will do the binding
//text=\"hi\" -> this.text="hi";
//text=method() -> this.text=method();
//binding=something -> auto binding = something;
//if not defined, auto binding = viewModel; is added.
//binding is always set on the first line
//could additionally be modified to allow for method calls without assignment, but not sure if needed
pure string assignstring(string content) {
    string ret;
    int start, mid, i;
    bool tracking, pastmid, strtracker, bound;

    void falsify() {
        tracking = false;
        pastmid = false;
        strtracker = false;
    }

    pure bool isformatchar(char a) {
        return (a == ' ' || a == '\n' || a=='\t' || a=='<' || a=='>' || a=='/' || a=='\\');
    }

    string bindingconcat(string ret, string content, int start, int mid, int i) {
        string first = content[start..mid];
        string second = content[mid+1..i];
        assert(first~"="~second == content[start..i]);
        if (first == "binding") {
            bound = true;
            return "auto binding = "~second~";\n"~ret;
        } else {
            return ret~"this."~content[start..i]~";\n";
        }
    }

    for(; i < content.length; i++) {
        if (!tracking && !isformatchar(content[i])) {
            start = i;
            tracking = true;
        }
        if (tracking && content[i]=='=') {
            pastmid = true;
            mid = i;
            if (i+1 < content.length && content[i+1] == '"') {
                strtracker = true;
                i++;
            }
        } else if (tracking && !pastmid && isformatchar(content[i])) {
            falsify();
        } else if (tracking && pastmid && strtracker && content[i] == '"') {
            falsify();
            ret ~= "this."~content[start..i+1]~";\n";
        } else if (tracking && pastmid && !strtracker && isformatchar(content[i])) {
            falsify();
            ret = bindingconcat(ret,content,start,mid,i);
        }
    }
    if (tracking && pastmid) {
        ret = bindingconcat(ret,content,start,mid,i);
    }
    if (!bound) {
        ret = "auto binding = viewModel;\n" ~ ret;
    }
    return ret;
}

template parsexml(string xml) {
    enum parsexml = xmlparse(xml);
}

pure string[] xmlparse(string xml) {
    //tag, attr, inner, remainder, elem
    int b[5]; //"|<jkh| |> |</jkh|>";
    int open;
    int i = -1;
    while(++i < xml.length) {
        if (xml[i] == '<') {
            b[0] = i;
            open = 1;
            break;
        }
    }
    while(++i < xml.length) {
        if ((xml[i] == ' ' || xml[i] == '\n') && b[1] == 0) {
            b[1] = i;
        }
        if (xml[i] == '>') { break; };
        if (open == 1 && xml[i] == '/' && xml[i+1] == '>') {
            --open;
            b[3] = i;
            b[4] = i+1;
            break;
        }
    }
    b[1] = b[1] == 0 ? i : b[1];
    b[2] = i;
    while(open > 0 && ++i < xml.length-1) {
        if (xml[i] == '<') {
            open += xml[i+1]=='/' ? -1 : 1;
        }
        if (xml[i] == '/' && xml[i+1] == '>') {
            --open;
        }
    }
    b[3] = b[3] == 0 ? i : b[3];
    while (b[4] == 0 && ++i < xml.length) {
        if (xml[i] == '>') { break; }
    }
    b[4] = b[4] == 0 ? i : b[4];
    //"|<jkh| |> |</jkh|>";
    string ret[]; //tag, attr, inner, remainder, elem
    ret ~= xml[b[0]+1..b[1]];
    ret ~= xml[b[1]..b[2]];
    ret = ret ~ ((b[2]==b[3]) ? "" : xml[b[2]+1..b[3]]);
    ret ~= xml[b[4]+1..$];
    ret ~= xml[b[0]..b[4]+1];
    if (b[2] != b[3]) {
        assert(ret[0] == xml[b[3]+2..b[3]+(b[1]-b[0])+1]);
    }
    return ret;
}

The output when this is run is the following, which contains two runs of the control builder, to demonstrate viewModel changes. I know it's not real HTML; that's something I have yet to learn.
<div bound>
<p itext1>bound inner<p>
<p itext2>bound inner inner<p>
<p itext3>my name is: robot<p>
<p itext4>too deep, would error if I tried binding<p>
<div somename>
<p itextinside>first<p>
<p itextinside>second<p>
<p itextinside>third<p>
<div>
<div double>
<p itext1>bound inner inner<p>
<p itext2>danger zone!<p>
<div>
<div>

<div new text>
<p itext1>new inner text<p>
<p itext2>bound inner inner<p>
<p itext3>my name is: robot<p>
<p itext4>too deep, would error if I tried binding<p>
<div somename>
<p itextinside>first<p>
<p itextinside>second<p>
<p itextinside>third<p>
<p itextinside>first<p>
<div>
<div double>
<p itext1>bound inner inner<p>
<p itext2>danger zone!<p>
<div>
<div>

Answer: Here's what I've settled with in case anyone is interested:

Using the factory pattern I was able to build an injectable factory. Then using dependency injection I forward the factory through all the control extraction steps. This has the advantage of allowing a control class to instantiate a sub-control inside itself without worrying about importing the proper modules (and avoiding circular references). The extract method is now able to access all the member variables because it is passed the type (as ForwardRefT), instead of using the class's own, bypassing the lack of polymorphism for templated methods. Thankfully, this also removes the need to add template mixins everywhere, which was ugly.

Now that you've read it and are fully prepared for the wackiness to come, here is the code:

//This factory is mixed in once, at the highest level of your web app.
//This way it can reference every module with controls in it and inject itself into them, giving them that ability as well.
mixin template FactoryGen() {
    import darklight.parse : parse_tag;

    struct ControlFactory {
        static auto Extract(string _xml, T)(T viewModel) {
            //parse_tag takes "<StackPanel ...> ... </StackPanel>" and returns "StackPanel"
            enum Ty = parse_tag!(_xml);
            mixin("return "~Ty~".extract!(ControlFactory, _xml, "~Ty~", T)(viewModel);");
        }
    }
}

abstract class Control {
    override abstract string toString();

    static auto extract(alias F, string _xml, ForwardRefT, T)(T viewModel) {
        ForwardRefT self = new ForwardRefT();
        //parse_attr takes "<StackPanel {0}> ... </StackPanel>" and returns {0}
        mixin(assignstring(parse_attr!(_xml)));
        return self;
    }
}

abstract class ContainerControl(int I=0) : Control {
    static if (I == 0) {
        Control[] controls;
    } else {
        Control[I] controls;
    }

    override string toString() {
        string content;
        static if (I == 0) {
            content = reduce!("a~b.toString()")("",this.controls);
        } else {
            foreach(control; this.controls) {
                content ~= control.toString();
            }
        }
        return format(`
<div>%s
</div>`, content);
    }
}

abstract class StackControl(int I=0) : ContainerControl!(I) {
    static auto extract(alias F, string _xml, ForwardRefT, T)(T viewModel) {
        ForwardRefT self = new ForwardRefT();
        mixin(assignstring(parse_attr!(_xml)));
        //parse_inner takes "<StackPanel ...> {0} </StackPanel>" and returns {0}
        //subparse takes "<1></1><2></2>" and returns ["<1></1>","<2></2>"]
        enum contents = subparse(parse_inner!(_xml));
        static assert(I == 0 || contents.length == I);
        Chain!(F, contents)(self, viewModel);
        return self;
    }

    static void Chain(alias F, alias inners, ForwardRefT, T)(ForwardRefT self, T viewModel) {
        static if (inners.length > 0) {
            Control next = F.Extract!(inners[0])(viewModel);
            static if (I == 0) {
                self.controls ~= next;
            } else {
                self.controls[I-inners.length] = next;
            }
            Chain!(F, inners[1..$])(self, viewModel);
        }
    }
}

class StackPanel : StackControl!() { }

class ListPanel : ContainerControl!(0) {
    static auto extract(alias F, string _xml, ForwardRefT, T)(T viewModel) {
        ForwardRefT self = new ForwardRefT();
        //assignstringPlus is like assignstring but makes sure there is a variable named "list", defaulting it to null if it isn't in the xml
        mixin(assignstringPlus(parse_attr!(_xml),"list","null"));
        enum contents = subparse(parse_inner!(_xml));
        static assert(contents.length == 1);
        foreach(item; list) {
            self.controls ~= F.Extract!(contents[0])(item);
        }
        return self;
    }
}

I'm very pleased with the results as it makes user code much more manageable and cleaner without the mixins. The extract method may look nasty, but the only really scary part is the method signature. Users shouldn't ever have to write another extract method, but if they do it should be much easier than it was in the past.
{ "domain": "codereview.stackexchange", "id": 3185, "tags": "template-meta-programming, mixins, d, silverlight" }
How can I understand this way of creating random data?
Question: I need to create random data using these lines:

n_samples = 3000
X = np.concatenate((
    np.random.normal((-2, -2), size=(n_samples, 2)),
    np.random.normal((2, 2), size=(n_samples, 2))
))

but I didn't get the difference between the two random lines here. I understand that this is used to concatenate two groups of random numbers to create 2 clusters, but why does one of them use (-2, -2) and the other (2, 2)? And is the 2 in size there because concatenate is used to merge 2 groups of random data, or not?

Answer: Providing multiple values to either the loc or scale arguments can be used to generate multiple random distributions at once with different parameters. In the code you provided the values for the loc argument are the same, meaning that you could also just use the value -2 instead of (-2, -2). You can see this when fixing the seed and generating new numbers:

import numpy as np

np.random.seed(0)
print(np.random.normal((-2, -2), size=(5,2)))
# [[-0.23594765 -1.59984279]
#  [-1.02126202  0.2408932 ]
#  [-0.13244201 -2.97727788]
#  [-1.04991158 -2.15135721]
#  [-2.10321885 -1.5894015 ]]

np.random.seed(0)
print(np.random.normal(-2, size=(5,2)))
# [[-0.23594765 -1.59984279]
#  [-1.02126202  0.2408932 ]
#  [-0.13244201 -2.97727788]
#  [-1.04991158 -2.15135721]
#  [-2.10321885 -1.5894015 ]]

The difference between the two lines is that one is generating random noise from a normal (Gaussian) distribution with a mean of -2 and the other from a mean of 2; see also the loc keyword in the documentation.
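To make the roles of the two calls and of size=(n_samples, 2) concrete, here is a small sketch of my own (not from the original post): the 2 in size is the number of coordinates per point, while np.concatenate is what merges the two clusters.

```python
import numpy as np

np.random.seed(0)
n_samples = 3000

# One cluster of 2-D points centred near (-2, -2) and one near (2, 2).
# size=(n_samples, 2) means "n_samples points with 2 coordinates each";
# it is np.concatenate that merges the two groups, stacking them row-wise.
X = np.concatenate((
    np.random.normal(-2, size=(n_samples, 2)),
    np.random.normal(2, size=(n_samples, 2)),
))

print(X.shape)                     # (6000, 2): 2 * n_samples points
print(X[:n_samples].mean(axis=0))  # close to [-2, -2]
print(X[n_samples:].mean(axis=0))  # close to [ 2,  2]
```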
{ "domain": "datascience.stackexchange", "id": 11263, "tags": "python, numpy, gaussian" }
Can someone explain diamagnetism in plasma?
Question: Diamagnetic plasma has an internal resistance to an outside magnetic field. I thought that a good analogy for this would be electrical resistivity. In high school we learned that different materials would resist externally applied electric fields differently. You put the same voltage across a block of wood or plastic or metal - you get a different current - like Ohm's law. What if we viewed plasma the same way? You put the same magnetic field across different plasmas and they resist conducting the magnetic field. They have a diamagnetic constant (like solids do) and it is directly analogous to electrical resistivity. I drew it out: Diamagnetism is never explained this way. People always talk about inducing magnetic fields inside plasmas, either by motion, organization or by externally applied fields. My questions: Does anyone have a text which looks at plasma this way? What is the function predicting this diamagnetic constant? I would guess: diamagnetic constant = Function(plasma density, temperature, composition, etc.) Answer: So I looked into this a little more and think I have an answer. If we assume a force-free situation, then we can write: $$ \mathbf{J} \times \mathbf{B} = c_{o} \nabla \cdot \mathbb{P} $$ where $\mathbf{J}$ is the current density, $\mathbf{B}$ is the magnetic field vector, $c_{o}$ is some constant, and $\mathbb{P}$ is the pressure tensor.
If we decompose vectors into those parallel and perpendicular to $\mathbf{b}$ = $\mathbf{B}$/B as: $$ \mathbf{A}_{\parallel} = \mathbf{b} \left( \mathbf{b} \cdot \mathbf{A} \right) \\ \mathbf{A}_{\perp} = \left( \mathbb{I} - \mathbf{b} \mathbf{b} \right) \cdot \mathbf{A} \\ = \mathbf{b} \times \left( \mathbf{A} \times \mathbf{b} \right) $$ then we can define the diamagnetic current as: $$ \mathbf{J}_{\perp} = \frac{c}{B} \mathbf{b} \times \left( \nabla \cdot \mathbb{P} \right) $$ This has an obvious simplification whereby one assumes a scalar pressure so that the term on the right-hand side goes to $\mathbf{b} \times \nabla P$. From this you can see that $\mathbf{J}_{\perp}$ will act to decrease $\mathbf{B}$ in regions of larger pressure. You can also show that $\nabla \cdot \mathbf{J}_{\perp} \neq 0$, which is related to the assumption of quasi-neutrality. Regardless, the end result is that the plasma thermal pressures act against the magnetic pressures and can, in some situations (e.g., solar prominences), result in force balance.
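As a small symbolic check of the scalar-pressure limit (my own sketch, assuming a uniform field $\mathbf{B} = B_0 \hat{z}$ and a pressure $P(x)$ varying only along $x$; the symbols c and B0 stand for the constant and field magnitude above):

```python
import sympy as sp

x, c, B0 = sp.symbols('x c B0', positive=True)

# Assumed setup (not from the original answer): uniform field B = B0 z-hat
# and a scalar pressure P(x) varying only along x.
P = sp.Function('P')(x)
b = sp.Matrix([0, 0, 1])                  # unit vector along B
gradP = sp.Matrix([sp.diff(P, x), 0, 0])  # nabla P for a scalar pressure

# Diamagnetic current, scalar-pressure form: J_perp = (c/B) b x (nabla P)
J = (c / B0) * b.cross(gradP)

print(J.T)          # only a y-component, proportional to dP/dx
print(J.dot(b))     # 0: the current is perpendicular to B
print(J.dot(gradP)) # 0: and perpendicular to the pressure gradient
```

So the current flows along surfaces of constant pressure, at right angles to both the field and the pressure gradient, which is what lets it reduce B inside the high-pressure region.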
{ "domain": "physics.stackexchange", "id": 21468, "tags": "plasma-physics" }
Are there any examples of using an approximate constraint manifold in MoveIt?
Question: Hi all, I intend to do planning with an approximated constraint manifold to make sure that the end effector can only rotate along the z_axis of the base link. Following the tutorial, I generated and loaded the database, but I am not sure how to use it in the move_group C++ interface. Another question: Can I load two constraint databases when launching move_group? Or can I include two constraints in one database? Thanks for any help! Originally posted by xibeisiber on ROS Answers with karma: 137 on 2021-11-03 Post score: 0 Original comments Comment by gvdhoorn on 2021-11-08: Another question: Can I load two constraint databases when launching move_group? or Can I include two constraints in one database? Do not post follow-up questions as edits to questions which have already been answered. Your follow-up question has almost 0 visibility. You should post a new question. Comment by v4hn on 2021-11-08: Gijs is right, I would not have seen this without his comment. I edited the answer below. Answer: As answered by @v4hn here: https://github.com/ros-planning/moveit/issues/2941. The move_group node loads the database on startup and registers the name of the constraint that you defined when you generated the database. To use it for planning you just have to set a path constraint with the same constraint name. Make sure you also add the full constraint specification there as well though, because this message is still used to check transitions between the stored states. Edit for second question: You configure the constraint database by setting the folder path as a ros parameter: <param name="move_group/constraint_approximations_path" value="$(find package)/cadb"/> I believe any constraint approximation database file you store in that folder will be considered at runtime. Originally posted by xibeisiber with karma: 137 on 2021-11-03 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by xibeisiber on 2021-11-09: Thanks for your answer.
I managed to do this by setting the folder path and combining the manifest info of two databases into one manifest file.
{ "domain": "robotics.stackexchange", "id": 37083, "tags": "moveit, ros-melodic" }
Why don't metals bond when touched together?
Question: It is my understanding that metals are a crystal lattice of ions, held together by delocalized electrons, which move freely through the lattice (and conduct electricity, heat, etc.). If two pieces of the same metal are touched together, why don't they bond? It seems to me the delocalized electrons would move from one metal to the other, and extend the bond, holding the two pieces together. If the electrons don't move freely from one piece to the other, why would this not happen when a current is applied (through the two pieces)? Answer: I think that mere touching does not bring the surfaces close enough. The surface of a metal is usually not perfect. Maybe it has an oxide layer that resists any kind of reaction. If the metal is extremely pure and if you bring two pieces of it extremely close together, then they will join together. It's also called cold welding. For more information: What prevents two pieces of metal from bonding? Cold Welding
{ "domain": "physics.stackexchange", "id": 14076, "tags": "material-science, metals" }
How to escape the center of a room without gravity?
Question: Imagine you're an astronaut on the International Space Station and your fellow astronauts played a prank on you by taking all your clothes and putting you in the center of a module so that you cannot reach anything with either your hands or your feet. What would be the most effective way to escape that situation if you're reluctant to start peeing? Answer: Since you have air around you, you can just take a deep breath and blow it out. There's actually no need to turn you head while doing this, as some other answers suggest. Air has a high Reynolds number at human scales, and so the scallop theorem does not hold: even though the movements of inhaling and exhaling are reciprocal, the airflows they create are not. (You can test this yourself by holding a hand in front of your mouth: you can easily feel the jet of air created by blowing out, even with your hand fully extended, but you can't produce a "reverse jet" no matter how hard you inhale.) In practice, the momentum produced by inhaling is pretty negligible, as air flows in towards your mouth and nose from all sides, and so the only thing that matters is which way you exhale. By blowing air out of your mouth in one direction, you create a net airflow in that direction, and so, by conservation of momentum, propel yourself in the opposite direction. It works for squid and jellyfish (and scallops!), and it will work for you, too. Maybe not very efficiently, but surely enough to reach a wall in the tight confines of the ISS. Now, if we ever start building space stations with huge air-filled bubbles hundreds of meters across, then this might become a problem, but until then you should be fine. Besides, you may not even need to resort to such huffing and puffing. 
Any real space station designed for human habitation in microgravity needs to have active air circulation fans anyway, both for heat distribution (important for both humans and equipment, since convection doesn't work in microgravity) and to keep exhaled air from accumulating around your body e.g. when you're sleeping. So in practice, the air around you will be moving slowly anyway, and you just need to wait until this ambient airflow pushes you close to a wall. And of course, on the actual ISS, I doubt there's even any space big enough to properly pull off this prank. The largest open spaces on the ISS, like the Kibo pressurized module, are surrounded by ISPRs that are about 2 meters (6 ½ feet) wide, effectively making the interior cross-section a 2×2 meter square. Even if your crewmates somehow managed to position your body lengthwise along the center axis of an otherwise empty module so that you couldn't just reach out and grab a handhold, you'd just need to twist around like a cat (or, more likely, just flail around semi-randomly) until you managed to turn yourself 90° around, at which point either your toes or your hands should surely be able to reach a wall.
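For scale, here is a rough estimate of the recoil from a single exhale (my own back-of-the-envelope numbers for breath volume, jet speed, and body mass are assumptions, consistent with the "maybe not very efficiently" caveat above):

```python
# Back-of-the-envelope estimate (assumed round numbers, not from the answer):
# how fast does one forceful exhale push you, and how long to drift 1 m?
rho_air = 1.2      # kg/m^3, air density at cabin pressure (assumed)
V_breath = 0.004   # m^3, one deep exhaled breath (~4 L, assumed)
v_jet = 10.0       # m/s, speed of the exhaled jet (assumed)
m_body = 70.0      # kg, astronaut mass (assumed)

impulse = rho_air * V_breath * v_jet  # momentum carried away by the jet
dv = impulse / m_body                 # your recoil speed per breath

distance = 1.0                        # m to the nearest wall
t_one_breath = distance / dv          # coasting on a single exhale

print(f"recoil per breath: {dv*1000:.2f} mm/s")
print(f"time to cover 1 m on one breath: {t_one_breath/60:.0f} min")
# Repeating the exhale every few seconds adds dv each time,
# so in practice you would reach the wall much sooner.
```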
{ "domain": "physics.stackexchange", "id": 33219, "tags": "gravity, momentum" }
Decoherence via environmental photons
Question: I am reading the book "Decoherence and the Quantum-to-Classical Transition" by Maximilian A. Schlosshauer, and I have come to understand that for a two-level system with eigenstates $|a\rangle$, $|b\rangle$, for which the environment respectively adopts eigenstates $|E_a\rangle$, $|E_b\rangle$ corresponding to the system states, the suppression of the coherent terms in the $a,b$ basis is proportional to $\langle E_a|E_b\rangle$. I also learned the general formula for how scattering of light environmental particles carries away information about a system, thereby reducing the environmental wavefunction overlap and decohering it. My question is: what about the kind of decoherence caused by photons either being absorbed or not being absorbed in a given molecule? For instance, if a molecule A absorbs at frequency $\omega$ in its ground state, and the excited molecule $A*$ absorbs at frequency $\omega_1$, it seems reasonable to me that irradiating the sample with $\omega$-frequency photons should suppress the ability of $A$ to exist in a coherent superposition of excited and ground states. However, I can't think of a mathematical theorem for what this decoherence would look like, and I can't find one in my book either. Does anyone know what theorem describes this specific type of decoherence? Answer: Yes, shining light which is more/less likely to be absorbed by the excited vs unexcited states will certainly cause decoherence, as will any interaction which carries away information about the state of the system into the environment. As Schlosshauer discusses, this effect explains why it's so hard to keep atoms in coherent superpositions of energy eigenstates (see discussion in 2.8.2, although the assumption that $H_S >> H_{int}$ is not good in the case of laser light with enough energy to excite an atom).
Regarding a "mathematical theorem for what this decoherence would look like", the procedure would be very similar to that for photon scattering without absorbtion, but you'd have to use a Fock space to describe the system since the number of particles changes, and the S matrix would also encode information about photon creation/destruction. I recommend reading the section on Second Quantization in Sakurai (starts on page 461 of the second edition) or another good quantum mechanics text if you aren't familiar with Fock spaces. If a photon was definitely absorbed by an unexcited atom and definitely not absorbed by an excited atom, then the overlap between these states would be zero. A more realistic interaction would result in a final states which are some superposition of photon absorbed/scatter/unchanged and the overlap would be non-zero.
{ "domain": "physics.stackexchange", "id": 93625, "tags": "quantum-mechanics, photons, absorption, decoherence" }
How to build pcl in_hand_scanner
Question: I'm using ROS Groovy on Ubuntu 12.04. The PCL I use is pcl17, downloaded and compiled from source (refer to this page). The compilation finished without errors. I'm trying to learn recognition, and I found that the 3D pointcloud model that is to be recognized in the scene is usually required. And then I found this, the in-hand-scanner in the tutorial documentation, that seems to help me build my own 3D pointcloud model for recognition. However, I tried to find it in this folder: ~/pcl_overlay/install_isolated/bin and there is no executable file whose name is related to in-hand-scanner. The folder is not empty. There are several executable files that look related to the applications in this folder: ~/pcl_overlay/src/pcl_unstable/pcl/pcl_trunk/src/apps There is a folder named in_hand_scanner in the apps folder, too. I found some discussion on the Internet, set -DBUILD_apps=ON in the CMakeLists.txt of the package, and then compiled it again. After compilation, more executable files appear in the bin folder, such as pcl_openni_tracking, but still, there is no pcl_in_hand_scanner. What should I do to generate the in-hand-scanner executable file, or is there a better way to build a 3D pointcloud model? Thanks~ Originally posted by Albert K on ROS Answers with karma: 301 on 2013-07-13 Post score: 0 Answer: You might have to enable the other flag BUILD_app_in_hand_scanner. When you run CMake, you should see a summary of the apps that it will build (usually saying XXXapp is disabled). Originally posted by vhwanger with karma: 52 on 2013-11-22 This answer was ACCEPTED on the original site Post score: 0
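Putting the question and the accepted answer together, the configure step would look something like the following (a sketch only: the exact per-app flag name is taken from the answer, the build directory path is assumed, and the in_hand_scanner app additionally needs its GUI/OpenNI dependencies to be found by CMake):

```shell
# from the PCL build directory (path assumed)
cmake -DBUILD_apps=ON -DBUILD_app_in_hand_scanner=ON ..
# check CMake's app summary: in_hand_scanner should no longer say "disabled"
make
```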
{ "domain": "robotics.stackexchange", "id": 14906, "tags": "ros, pcl, object-recognition" }
A contradiction in Nonrelativistic Quantum Field Theory
Question: Reference: "Field Quantization" by W. Greiner & J. Reinhardt, 1996 edition.

In the above reference, as concerns the Hamilton density $\:\mathcal H\:$ and the Hamiltonian $\:H\:$ of the Schrödinger field, we read:

The Hamilton density is
\begin{equation} \mathcal H = \pi\dfrac{\partial\psi}{\partial t}-\mathcal L = \dfrac{\hbar^2}{2m}\boldsymbol\nabla\psi^*\boldsymbol{\cdot\nabla}\psi+V\left(\boldsymbol x,t\right)\psi^* \psi\,, \tag{3.6}\label{3.6} \end{equation}
which, after an integration by parts, leads to the Hamiltonian
\begin{equation} H=\int\mathrm d^3\boldsymbol x\,\mathcal H\left(\boldsymbol x\right) =\int\mathrm d^3\boldsymbol x\,\psi^*\left(\boldsymbol x\right)\left(-\dfrac{\hbar^2}{2m}\nabla^2+V\left(\boldsymbol x,t\right)\right)\psi\left(\boldsymbol x\right) \tag{3.7}\label{3.7} \end{equation}

The path from expression \eqref{3.6} to \eqref{3.7} is based on the relation
\begin{equation} \int\mathrm d^3\boldsymbol x\left(\boldsymbol\nabla\psi^*\boldsymbol{\cdot\nabla}\psi\right) = -\int\mathrm d^3\boldsymbol x\left(\psi^*\nabla^2 \psi\right) \tag{01}\label{01} \end{equation}

I didn't have trouble proving the above equation. The trouble here is the contradiction of equating a real number (the left-hand side) to a number that is in general complex (the right-hand side). So, the question is whether equation \eqref{01} is wrong, or is a priori accepted even with this contradiction.

Answer: There is a seeming contradiction, as the integrands are in fact as you described them ($\boldsymbol{\nabla} \psi ^{*} \cdot \boldsymbol{\nabla} \psi \in \mathbb{R}$ while $\psi ^{*} \nabla ^{2} \psi \in \mathbb{C}$). However, the equality is in regard to their volume integrals, which can be shown to be strictly real. That is because the resulting surface terms from the integration by parts are assumed to vanish at the boundaries of the integral (basically at infinity).
Let's write the complex $\psi (x)$ as:
\begin{equation} \psi (x) = \phi (x) + i \, \chi (x) \, , \quad \phi (x) , \chi (x) \in \mathbb{R} \end{equation}
The real quantity you pointed out shall be:
\begin{equation} \boldsymbol{\nabla} \psi ^{*} \cdot \boldsymbol{\nabla} \psi = (\boldsymbol{\nabla} \phi )^{2} + (\boldsymbol{\nabla} \chi )^{2} \end{equation}
While the complex quantity is:
\begin{equation} \psi ^{*} \nabla ^{2} \psi = \phi \nabla ^{2} \phi +i \, (\phi \nabla ^{2} \chi - \chi \nabla ^{2} \phi) + \chi \nabla ^{2} \chi \end{equation}
The imaginary part of the latter equality can be integrated by parts:
\begin{equation} \int i \, (\phi \nabla ^{2} \chi - \chi \nabla ^{2} \phi) \, d^{3}x = i\, \Big[\phi \mathbf{\hat{r}} \cdot \boldsymbol{\nabla} \chi - \chi \mathbf{\hat{r}} \cdot \boldsymbol{\nabla} \phi \Big]_{\mathcal{V} } + \int i \, (\boldsymbol{\nabla} \chi \cdot \boldsymbol{\nabla} \phi - \boldsymbol{\nabla} \phi \cdot \boldsymbol{\nabla} \chi) \, d^{3}x \end{equation}
where $\Big[...\Big]_{\mathcal{V} }$ implies evaluating the argument in the brackets at the limits of the integral, and $\mathbf{\hat{r}}$ is the unit vector along the gradient. The integral is identically 0, so the only complex parts remaining are the surface terms in the bracket. But since in your proof you implicitly assumed that either $\psi$ or $\boldsymbol{\nabla} \psi$ (or both) vanish at infinity, this follows trivially for its constituent parts $\phi$ and $\chi$. Thus the complex surface term is also 0.

EDIT: A minor addition: we may also show the vanishing of the imaginary part of the integral using Green's theorem, as mentioned in the comments.
Like you said: \begin{equation} \int _{\mathcal{V}} (\phi \nabla ^{2} \chi - \chi \nabla ^{2} \phi) \, d^{3}x = \int _{\mathcal{V}} \boldsymbol{\nabla} \cdot (\phi \boldsymbol{\nabla} \chi - \chi \boldsymbol{\nabla} \phi) \, d^{3}x = \int _{\partial \mathcal{V}} (\phi \boldsymbol{\nabla} \chi - \chi \boldsymbol{\nabla} \phi) \cdot d\mathbf{S} \end{equation} In order to perform this, we initially assume a finite volume $\mathcal{V}$ so that a surface boundary $\partial \mathcal{V}$ may be defined which, again as you said, we shall send to infinity i.e. $\mathcal{V}$ becomes all of space. We therefore assume the finite $\mathcal{V}$ is a sphere with a boundary being its surface at the maximum radial distance $R$. Hence the final integral is: \begin{equation} \int _{0} ^{2\pi} d\varphi \int _{0} ^{\pi} \sin{\theta} \, d\theta \, (\phi \boldsymbol{\nabla} \chi - \chi \boldsymbol{\nabla} \phi) \Big |_{\rho = R} \cdot \boldsymbol{\hat{\rho}} = \int _{0} ^{2\pi} d\varphi \int _{0} ^{\pi} \sin{\theta} \, d\theta \, \left (\phi \frac{\partial \chi}{\partial \rho} - \chi \frac{\partial \phi}{\partial \rho} \right ) \Bigg |_{\rho = R} \end{equation} Now for all of space, send $R \rightarrow +\infty$. Thus you once again get the surface terms at infinity where either the functions themselves or their derivatives (or both) vanish: \begin{equation} \left (\phi \frac{\partial \chi}{\partial \rho} - \chi \frac{\partial \phi}{\partial \rho} \right ) \Bigg |_{\rho = +\infty} = 0 \end{equation}
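A quick one-dimensional symbolic sanity check of equation (01) (my own sketch: the test function is an arbitrary choice that vanishes at infinity and has both real and imaginary parts):

```python
import sympy as sp

x = sp.symbols('x', real=True)

# Test function vanishing at infinity, with nontrivial real and imaginary parts
psi = (1 + sp.I*x) * sp.exp(-x**2)
psic = (1 - sp.I*x) * sp.exp(-x**2)   # psi*, written out by hand (x is real)

# 1-D analogues of the two sides of Eq. (01)
lhs = sp.integrate(sp.diff(psic, x) * sp.diff(psi, x), (x, -sp.oo, sp.oo))
rhs = -sp.integrate(psic * sp.diff(psi, x, 2), (x, -sp.oo, sp.oo))

print(sp.simplify(lhs - rhs))   # 0: the two integrals agree
print(sp.im(sp.simplify(lhs)))  # 0: and they are real, as claimed
```

Even though the integrand $\psi^* \psi''$ is complex pointwise, its imaginary part integrates to zero for boundary-vanishing functions, exactly as the surface-term argument above says.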
{ "domain": "physics.stackexchange", "id": 89192, "tags": "field-theory, hamiltonian, boundary-conditions, complex-numbers, boundary-terms" }
Did NASA/JPL get "waning" and "waxing" backwards in this video?
Question: The NASA JPL video What's Up: September 2019 Skywatching Tips from NASA says (per both the closed captions and audio): We’re in a several-month period right now when the new Moon falls right around the end of each month. This means we get to enjoy lovely waning crescent moons at dusk for the first few days of each month, and delightful waxing crescents in the predawn sky near the end of each month. Are "waning" and "waxing" used correctly here or is it just the opposite of what's said? Would the video be correct if "waning" and "waxing" were replaced by "setting" and "rising"? Answer: I think they did make a mistake. The moon waxes after a new moon and wanes before a new moon. To wax means to grow bigger or stronger (and is cognate with "waist"!). To wane means to become smaller or weaker (cognate with "waste" and "vacuum"). The moon that sets shortly after sunset in the evening is a waxing moon. The moon that rises shortly before sunrise in the morning is a waning moon.
{ "domain": "astronomy.stackexchange", "id": 3945, "tags": "the-moon, amateur-observing, moon-phases, nasa" }
Can gas be made to block radiation better?
Question: Can any gas block radiation? I ask this because I would like to know if the properties of any gas element would chemically react differently with radiation from adding an electrical current. This picture shows an Earth magnet's field deflecting plasma naturally in a vacuum. Does any gas extend the electromagnetic shielding properties of a magnetic field? I am interested if the lack or composition of an atmosphere's chemical properties would inhibit or complement the deflection and absorbing of cosmic radiation when coupled with a magnetic field. Can contained pure $\ce{O2 O1 O3}$ be cycled back with electricity to replenish $\ce{O3}$ broken down by radiation while giving the gas an electromagnetic field? Answer: Your question seems to be about the ozone layer, but shows some misunderstanding. First, ozone, $\ce{O3}$, absorbs some "radiation", specifically electromagnetic radiation, e.g. visible light or ultraviolet light (UV), as do many other gases. This absorption is not the same for all wavelength ("colors"), but in the UV region peaks about 250 nm, which is a good thing for us, because plants and animals have not evolved good protection from that light, and the small amount of $\ce{O3}$ in the upper atmosphere is enough to block most of that UV. Sulfur dioxide, $\ce{SO2}$, released in volcanic eruptions, is also able to absorb 250 nm UV. Molecular oxygen, $\ce{O2}$, absorbs "vacuum UV", 10-100 nm, which otherwise would be a problem. Second, Earth is continually bombarded by another type of "radiation", particles such as cosmic rays and solar energetic particles. Rather than electromagnetic energy, these are made of particulate matter such as protons or nitrogen nuclei. All gases, in fact all matter, help block these particles. Magnetic fields such as that of the Earth help divert them around the planet, too, but there is some question whether a magnetic field is essential to blocking particulate radiation. 
Here at the bottom of the atmosphere there is a column of air weighing about a kilogram over each square centimeter, which is sufficient to block primary cosmic rays, though showers of weaker secondary particles do reach the surface. There is no advantage to a conductive Faraday cage to protect the Earth. Third, all gases conduct electricity when ionized. The majority of air is nitrogen, oxygen and argon, which produce various colors in an aurora. Though magnificent to watch, aurorae are simply evidence of energetic particles trapped in a magnetic field, ionizing the upper atmosphere. This layer of the atmosphere is called the ionosphere, and is responsible for long-distance radio communication. Its height varies from day to night and with solar storms.
{ "domain": "chemistry.stackexchange", "id": 9979, "tags": "experimental-chemistry, electrochemistry, quantum-chemistry, electromagnetic-radiation" }
Is the Arctic sea ice volume increasing?
Question: According to the graph below, Arctic sea ice volume has been decreasing from 1979-2012: Though this graph from the ESA’s CryoSat satellite shows that the volume of Arctic sea ice increased 50% from 2012 to 2013: What is the explanation for the differing accounts of Arctic sea ice volume trends? Answer: This question has already been answered on Skeptics.SE: Did the Arctic Ice Sheet grow by 60% from 2012 to 2013? The short answer is that those two graphs are consistent with each other. The first shows a 33-year trend of declining Arctic sea ice (summer minimum, I think), whereas the second highlights inter-annual variation in Arctic sea ice over the last four years. We don't expect long-term trends to be strictly monotonic increases or decreases, so both positive or negative year-on-year differences, such as those between 2012 and 2013, can be consistent with the trend.
{ "domain": "earthscience.stackexchange", "id": 118, "tags": "sea-ice, cryosphere" }
How is the bond length calculated from the total electronic energy?
Question: In the quantum chemistry course that I currently attend, it was said several times that one of the key quantities derived from molecules by means of computation is the total electronic energy $E_\text{el}$, because from this many derived quantities can be calculated, such as bond length or $J$-coupling values for NMR experiments. However, I've never gotten a good explanation as to how to proceed when I've calculated a value for $E_\text{el}$ and want to find out the bond lengths in the molecule in question. What are the formulae and/or computation steps necessary for this? Answer: The traditional way of getting the right geometry for your system is by taking a fixed atomic structure, perform an electronic structure minimization, calculate the forces acting on the atoms from that, use them to alter the atomic structure according to the forces and then use the altered structure to start over again. This is repeated until you converge to the equilibrium geometry. However there is an alternative (quite elegant) way for DFT calculations using pseudopotentials or the PAW method with plane-wave basis sets: the ab-initio molecular dynamics method introduced by Roberto Car and Michele Parrinello. The "killer feature" of the Car-Parrinello method is that it allows the simultaneous solution of the electronic structure problem and the equations of motion for the nuclei which allows for the relaxation of the nuclei to find stable structures, i.e. the equilibrium geometry, as well as for thermal simulations of solids and liquids - so this method does offer much more than a simple geometry optimization. How does it achieve that? In the Car-Parrinello approach, the total Kohn-Sham (KS) energy is the potential energy as a function of the positions of the nuclei. The forces from this energy then determine the molecular dynamics (MD) for the nuclei. The special feature of the Car-Parrinello algorithm is that it also solves the quantum electronic problem using MD. 
This is accomplished by deriving the equations of motion from a fictitious Lagrangian which contains a fictitious kinetic energy of the electronic (KS) wave functions and some constraints to ensure that the wave functions remain orthonormal: \begin{align} \mathcal{L} &= \sum_{n} f_{n} m_{\psi} \langle \dot{\psi}_{n} | \dot{\psi}_{n} \rangle + \frac{1}{2} \sum_{i} M_{i} \dot{\vec{R}}_{i} \! {}^{2} + E_{\mathrm{DFT}}[\psi_{n}, \vec{R}_{i}] - \sum_{m, n} \Lambda_{n,m} \left( \langle \psi_{m} | \psi_{n} \rangle - \delta_{n,m} \right) \end{align} where $f_{n}$ is the occupation of the $n^{\mathrm{th}}$ Kohn-Sham eigenstate $\psi_{n}$, $M_{i}$ and $\vec{R}_{i}$ are the mass and the position of the $i^{\mathrm{th}}$ nucleus, $\delta_{n,m} = \begin{cases}1 & n=m \\ 0 & n \neq m \end{cases}$ is the Kronecker delta, $\dot{\psi}_{n} = \frac{\mathrm{d} \psi_{n}}{\mathrm{d}t}$ and $\dot{\vec{R}}_{i} = \frac{\mathrm{d} \vec{R}_{i}}{\mathrm{d}t}$ are the first derivatives with respect to time and $\Lambda_{n,m}$ is a Lagrange multiplier ensuring the constraint of wave function orthonormality. The first term in this Lagrangian represents the fictitious kinetic energy of the wave functions. There is no physical correspondence for this term. Ideally one would choose the fictitious mass $m_{\psi}$ of the wave functions equal to zero. The introduction of this unphysical quantity also led to the Lagrangian being called fictitious. The second term describes the classical kinetic energy of the nuclei. The third term $E_{\mathrm{DFT}}[\psi_{n}, \vec{R}_{i}]$ is the density functional total energy, which is a functional of the electronic wave functions and the atomic positions. The last term is the constraint of orthonormal wave functions $| \psi_{n} \rangle$.
Applying the principle of least action to this Lagrangian leads to the following Euler-Lagrange equations (equations of motion for the ab initio MD simulation): \begin{align} M_{i} \ddot{\vec{R}}_{i} &= \underbrace{- \nabla_{\vec{R}_{i}} E_{\mathrm{DFT}}}_{= \, \vec{F}_{i}} \\ m_{\psi} | \ddot{\psi}_{n} \rangle &= - \underbrace{ \frac{1}{f_{n}} \frac{\delta E_{\mathrm{DFT}}}{\delta \langle \psi_{n} |} }_{= \, \hat{H} | \psi_{n} \rangle } + \underbrace{ \sum_{m} | \psi_{m} \rangle \frac{\Lambda_{n,m}}{f_{n}} }_{\hat{=} \, | \psi_{n} \rangle \epsilon_{n}} \end{align} where $\ddot{\psi}_{n} = \frac{\mathrm{d}^{2} \psi_{n}}{\mathrm{d}t^{2}}$ and $\ddot{\vec{R}}_{i} = \frac{\mathrm{d}^{2} \vec{R}_{i}}{\mathrm{d}t^{2}}$ are the second derivatives with respect to time, $\hat{H}$ is the Kohn-Sham hamiltonian, $\epsilon_{n}$ is the $n^{\mathrm{th}}$ Kohn-Sham eigenvalue and $\vec{F}_{i}$ is the force acting on the $i^{\mathrm{th}}$ nucleus. The first set of equations is just Newton's equations of motion for the nuclei moving under the forces derived from $E_{\mathrm{DFT}}$. The stationary solution of the second set of equations is equivalent to the Kohn-Sham equations, since for a steady state all time derivatives vanish so that: \begin{align} \hat{H} | \psi_{n} \rangle &= \sum_{m} | \psi_{m} \rangle \frac{\Lambda_{n,m}}{f_{n}} \end{align} Constructing a matrix $\mathbf{\Lambda}$ from the Lagrange multipliers $\Lambda_{n,m}$ shows that $\mathbf{\Lambda}$ is proportional to the transpose of the matrix representation of $\hat{H}$, i.e. $\Lambda_{n,m} \propto H_{m,n}$. Diagonalizing $\mathbf{\Lambda}$ leads to the eigenvalues $\epsilon_{n}$ of the Kohn-Sham equations. So, this Lagrangian does not lead to the time-dependent Schroedinger equation for the electronic wave functions. Rather it creates a dynamic that (in the ideal case) keeps the electrons always in their ground state.
{ "domain": "chemistry.stackexchange", "id": 2032, "tags": "quantum-chemistry, computational-chemistry" }
Why are Recursive Enumerable Languages closed under union?
Question: The class of recursively enumerable languages (REL) is closed under union. I don't understand how it is closed. I followed this link. They have stated: Here the trick is to simulate both M1 and M2 “simultaneously”. In other words, we design a machine that executes one step of M1, followed by one step of M2, then again one step of M1 and so on. My question is: if the word doesn't belong to the language of either machine, then how did they decide that the class is closed? Both machines can be in an infinite loop, and we cannot decide if the word belongs to the language accepted by M1 or M2. What am I missing here? Also, is there any relation between halting and the closure properties of Turing-recognizable languages? Answer: "Closed under union" means that if $L_1,L_2\in RE$, then $L_1\cup L_2\in RE$. The machine you described will indeed accept a word $w$ if $w\in L_1\cup L_2$. If $w\notin L_1\cup L_2$, then the machine might not halt. But that is fine, since we only want to show that the language is in $RE$, not $R$.
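The dovetailing idea in the answer can be sketched in Python. Here each "machine" is modeled as a plain generator that yields False while running and True on acceptance — a toy stand-in for a Turing machine, not a real simulator:

```python
from itertools import count

def union_recognizer(m1, m2):
    """Dovetail two recognizers: run one step of each in turn.
    Accept as soon as either accepts. If neither ever accepts or
    halts, this call never returns -- which is allowed for RE."""
    machines = [m1, m2]
    while machines:
        for m in list(machines):
            try:
                if next(m):          # one step; True means "accept"
                    return True
            except StopIteration:    # this machine halted and rejected
                machines.remove(m)
    return False                     # both halted without accepting

# Toy recognizers: accepts_after(k) "accepts" after k steps;
# loops_forever() never halts, like a TM stuck in an infinite loop.
def accepts_after(k):
    for i in count():
        yield i == k

def loops_forever():
    while True:
        yield False

# Even though one machine never halts, the union recognizer accepts,
# because the other machine accepts after finitely many steps.
print(union_recognizer(accepts_after(3), loops_forever()))  # True
```

Running one step of each machine in turn is what makes this work: simulating M1 to completion before starting M2 would hang forever on the example above.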
{ "domain": "cs.stackexchange", "id": 18841, "tags": "turing-machines, closure-properties" }
Why does the Travelling Salesman Problem pose the restriction that each vertex can only be visited once?
Question: According to the wiki page of TSP as a graph problem, It is a minimization problem starting and finishing at a specified vertex after having visited each other vertex exactly once. Then what if a triangle is given, with two edges weighted $1$ and the third weighted $100$? Obviously, the shortest route visiting all three vertices passes the two $1$-weighted edges twice. So what's the meaning of such a restriction? Answer: This mostly comes down to what makes an interesting problem, and what can be easily analyzed. The restriction of visiting each vertex once is common in these sorts of problems. A traversal that visits each vertex no more than once is called a "path", and these are well-studied in graph theory, so there are quite a few existing theorems based on them. The version where you can visit vertices more than once can be readily reduced to the version where you can't (and in poly-time): for every pair of nodes, calculate the shortest distance between them, remove the edge directly between them (if any), and add an edge between them with weight equal to that shortest distance. (In your example graph, this would replace the 100 edge with a 2.) So your modified version both doesn't play into existing graph theory problems, and can be easily reduced to the standard one. It's easier just to analyze the standard one instead. TL;DR: It's because of precedent in graph analysis. See also the Hamiltonian cycle problem, which is very closely related, and the longest path, which is a bit more distantly related.
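The reduction mentioned in the answer — replace each edge with the shortest-path distance between its endpoints — can be sketched with Floyd–Warshall, using the triangle from the question as input (a minimal sketch, not a full TSP solver):

```python
def metric_closure(n, w):
    """All-pairs shortest paths (Floyd-Warshall) on an n x n weight
    matrix. Replacing each edge weight with the shortest-path distance
    reduces the 'revisits allowed' TSP variant to the standard
    visit-each-vertex-once version."""
    d = [row[:] for row in w]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

# The triangle from the question: edges A-B and B-C weigh 1, A-C weighs 100.
w = [[0, 1, 100],
     [1, 0, 1],
     [100, 1, 0]]
d = metric_closure(3, w)
print(d[0][2])  # 2 -- the 100-weight edge is replaced by the 1+1 detour
```

On the closed graph, the standard once-per-vertex tour A-B-C-A costs 1 + 1 + 2 = 4, exactly the cost of the walk A-B-C-B-A on the original graph.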
{ "domain": "cs.stackexchange", "id": 11735, "tags": "traveling-salesman" }
Got permission denied while trying to connect to Docker Daemon
Question: I have installed the docker using the docker installation page. After installing the docker, I have tried the Case 1 and clone the docker repo. When I ran the command './run.sh', I am getting the following error. Please help. Using options: ROS distro: melodic Image name: autoware/autoware Tag prefix: latest Cuda support: on Pre-release version: off UID: <1000> Launching autoware/autoware:latest-melodic-cuda docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.40/containers/create: dial unix /var/run/docker.sock: connect: permission denied. See 'docker run --help'. Originally posted by omkard on ROS Answers with karma: 64 on 2020-02-13 Post score: 0 Answer: @dynamicsoorya have you added your user to the docker group as mentioned here? Otherwise it looks like you'll need to modify the run.sh file to call sudo docker instead of just docker Originally posted by sgermanserrano with karma: 166 on 2020-02-14 This answer was ACCEPTED on the original site Post score: 1 Original comments Comment by omkard on 2020-02-18: @sgermanserrano Yes, that definitely worked. Thank you very much for your help. I have added my user to the docker this way: sudo usermod -aG docker your-user And that worked. Thanks a lot!
{ "domain": "robotics.stackexchange", "id": 34435, "tags": "ros, docker" }
Any use case of communication of nodes on different networks?
Question: I want to know if there are use cases where nodes on different (distributed) networks communicate with each other, using a single ROS Master. It's like: PC A on Network A: ROS Master and node A(A1, A2...) PC B on Network B: Node B(B1, B2...) (PC C on network C: Node C) I heard Rapyuta was an example of this case. If you know something about this, let me know! Originally posted by Lain Iwakura on ROS Answers with karma: 1 on 2016-01-17 Post score: 0 Original comments Comment by Javier V. Gómez on 2016-01-27: Rapyuta is not exactly what you mention as it is not using a single ROS master. However, it might still be useful for your case: http://rapyuta.org/ Answer: ROS can communicate across networks as long as they are routeable. Many users use a VPN to connect robots across private networks. Originally posted by tfoote with karma: 58457 on 2017-02-15 This answer was ACCEPTED on the original site Post score: 1
{ "domain": "robotics.stackexchange", "id": 23457, "tags": "ros, master" }
Is it wise to use RepeatMasker on prokaryotes?
Question: I'm looking for a way to identify low complexity regions and other repeats in the genome of Escherichia coli. I found that RepeatMasker may be used for example when drafting genomes of prokaryotes (E. coli example). But RepeatMasker works on a limited dataset of species, none of them being prokaryotes. By default, when running RepeatMasker, if no species is specified, it will compare with homo sapiens data. This seems rather inadequate, but the most relevant alternative, PRAP, requires a "dead" tool (VisCoSe, by Michael Spitzer). Is it still wise to use RepeatMasker on Escherichia coli? If yes, which settings would maximise relevance? Answer: If I understood your question correctly, you want to mask those regions in a (FASTA?) genome. I think you could identify those regions using mummer and mask them using bedtools. # align genome against itself nucmer --maxmatch --nosimplify genome.fasta genome.fasta # select repeats and convert the coordinates to bed format show-coords -r -T -H out.delta | awk '{if ($1 != $3 && $2 != $4) print $0}' | awk '{print $8"\t"$1"\t"$2}' > repeats.bed # mask those bases with bedtools bedtools maskfasta -fi genome.fasta -bed repeats.bed -fo masked.fasta Have a look at nucmer and bedtools maskfasta options to fine-tune your analysis.
{ "domain": "bioinformatics.stackexchange", "id": 223, "tags": "genome, repeat-elements, sequence-analysis" }
Chiral symmetry in the SSH model
Question: According to "A short course on topological insulators", chapter 1, in the SSH model, the consequence of chiral symmetry for the states with $E\ne 0$ is the presence of another state with $-E$. The orthogonality of the wave functions corresponding to $E>0$ and $E<0$ i.e., $\langle\psi_{E>0}|\psi_{E<0}\rangle=0$ gives rise to the identical support of the two wave functions on the two sublattices. Namely, for both energies $E>0$ and $E<0$: $$\langle\psi_{E}|P_A|\psi_{E}\rangle=\langle\psi_{E}|P_B|\psi_{E}\rangle$$ where $P_A$ and $P_B$ are projectors on sublattices A and B. Also, for $E=0$, we can choose the two states in such a way that one of them is supported by the sublattice A and the other by B. However, in this case, the orthogonality of the two states again results in equal support on the two sublattices. It seems a paradox! Any help would be appreciated.
Now take a new pair, $$ \psi_{1}=\frac{1}{\sqrt{2}}\psi_{A}+\frac{1}{\sqrt{2}}\psi_{B} $$ $$ \psi_{2}=\frac{1}{\sqrt{2}}\psi_{A}-\frac{1}{\sqrt{2}}\psi_{B} $$ This is again an orthogonal pair of unit vectors, and $H\psi_{1}=H\psi_{2}=0$, but now \begin{align*} \left\langle \psi_{1},P_{A}\psi_{1}\right\rangle & =\frac{1}{2}\left\langle \psi_{A},P_{A}\psi_{A}\right\rangle +\frac{1}{2}\left\langle \psi_{B},P_{A}\psi_{A}\right\rangle +\frac{1}{2}\left\langle \psi_{A},P_{A}\psi_{B}\right\rangle +\frac{1}{2}\left\langle \psi_{B},P_{A}\psi_{B}\right\rangle \\ & =\frac{1}{2}\left\langle \psi_{A},\psi_{A}\right\rangle +\frac{1}{2}\left\langle \psi_{B},\psi_{A}\right\rangle +\frac{1}{2}\left\langle \psi_{A},0\right\rangle +\frac{1}{2}\left\langle \psi_{B},0\right\rangle \\ & =\frac{1}{2} \end{align*} and \begin{align*} \left\langle \psi_{1},P_{B}\psi_{1}\right\rangle & =\frac{1}{2}\left\langle \psi_{A},P_{B}\psi_{A}\right\rangle +\frac{1}{2}\left\langle \psi_{B},P_{B}\psi_{A}\right\rangle +\frac{1}{2}\left\langle \psi_{A},P_{B}\psi_{B}\right\rangle +\frac{1}{2}\left\langle \psi_{B},P_{B}\psi_{B}\right\rangle \\ & =\frac{1}{2}\left\langle \psi_{A},0\right\rangle +\frac{1}{2}\left\langle \psi_{B},0\right\rangle +\frac{1}{2}\left\langle \psi_{A},\psi_{B}\right\rangle +\frac{1}{2}\left\langle \psi_{B},\psi_{B}\right\rangle \\ & =\frac{1}{2} \end{align*} and thus $$ \left\langle \psi_{1},P_{A}\psi_{1}\right\rangle =\left\langle \psi_{1},P_{B}\psi_{1}\right\rangle . $$ Just a few more minus signs needed for the same for $\psi_{2}$ . Sorry for the math notation. It is how I think.
{ "domain": "physics.stackexchange", "id": 81403, "tags": "condensed-matter, symmetry, topological-insulators" }
What causes highly compacted matter to collapse into a black hole?
Question: "The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole". Why does this happen? I mean, why should something finite (like the core of a very massive star) lead to an infinitely small point (singularity)? For me it's like spacetime has a "weakness" and it breaks if a region contains too much mass/energy. I imagined that the matter would just be very, very compacted, but not infinitely compacted, and the spacetime bent accordingly. Neutron stars are also very compacted and they bend spacetime a lot. I don't understand why, if we continue to compress the matter even more (and I don't say "compress it to an infinitely small point"), suddenly a black hole is created. Why is this happening? Answer: The further you compress a body of matter (making it denser), the greater the surface gravity becomes, and gravity is a distortion of spacetime. As the surface gravity goes up, the spacetime distortion becomes more extreme and the escape velocity for a projectile fired upwards from that surface gets bigger and bigger. At some point, the spacetime distortion is so great that the escape velocity equals the speed of light and this is when it is usually said that a black hole forms. For very large things like stars, the compression is performed by gravity. As to what is actually going on inside a black hole as it forms, we have mathematical models of the process but no way to extract data from a real black hole to determine if our models are correct or not. Note also that what we can see and learn about a black hole depends on whether we are observing it from outside or whether we are freely falling into it. This greatly complicates the problem of understanding what sort of thing a black hole really is.
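The condition "escape velocity equals the speed of light" can be made quantitative: setting the Newtonian escape speed $\sqrt{2GM/r}$ equal to $c$ gives $r = 2GM/c^2$, which happens to coincide with the Schwarzschild radius from full general relativity. A quick sanity check in Python, using standard (rounded) physical constants:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def escape_velocity(mass, radius):
    """Newtonian escape speed from the surface of a body."""
    return math.sqrt(2 * G * mass / radius)

def schwarzschild_radius(mass):
    """Radius at which the escape speed reaches c."""
    return 2 * G * mass / c**2

r_s = schwarzschild_radius(M_sun)
print(f"Schwarzschild radius of the Sun: {r_s / 1000:.2f} km")  # ~2.95 km
# At that radius the escape speed is, by construction, the speed of light:
print(f"{escape_velocity(M_sun, r_s) / c:.3f} c")
```

So a solar mass would have to be squeezed inside a sphere of roughly 3 km radius before the "escape velocity equals c" condition in the answer is met.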
{ "domain": "physics.stackexchange", "id": 68419, "tags": "black-holes, astrophysics, stellar-evolution" }
C# and Unity. Moving an object to a target and destroying the target once we're close
Question: This is my first ever c# script. I have played around a little and I think this is some proper c# code. Since it's my very first c# ever, I'm assuming there's a few mistakes, things that could be better, bad conventions and such. If you see any, please let me know so I can improve! using System.Collections; using System; using System.Collections.Generic; using UnityEngine; using UnityEngine.AI; public class Mover : MonoBehaviour { [SerializeField] GameObject target; private GameObject player; // creating a variable for the player object void Update() { if (target != null) { // if the target is already destroyed, we don't have to do anything anymore player = this.gameObject; // the script is attached to the player // Here I expected I could get the NavMeshAgent component via something like player.NavMeshAgent. // I do the same later for target.transform. // Is there a reason this component is not accessible via dot notation? GetComponent<NavMeshAgent>().destination = target.transform.position; // set the player's destination as the target's position. It starts moving automatically. if (AreClose(player, target)) { Destroy(target); } } } private static double Distance(GameObject a, GameObject b) { Vector3 aPos = a.transform.position; Vector3 bPos = b.transform.position; // A really long line.. What is the convention to make this more readable? return Math.Sqrt(Math.Pow(aPos.x - bPos.x, 2) + Math.Pow(aPos.y - bPos.y, 2) + Math.Pow(aPos.z - bPos.z, 2)); } private static bool AreClose(GameObject a, GameObject b) { double result = Distance(a, b); if (result < 2.5) // random magic number for now { return true; } else { return false; } } } Answer: Welcome to Code Review! I'm not too familiar with Unity, so I can't really comment or offer advice on Unity-specific things. There are still a few places where I feel like your code could be improved however.
Whitespace While I generally prefer to keep my code not too dense, I recommend never having more than one blank line in a row (except maybe if you want to separate a block of imports or usings from the rest of the code); some editors will even automatically remove any consecutive blank lines when you format your document. Similarly, that blank line at the end of your Distance() method doesn't serve any purpose, so I would just remove it, or maybe move it right below your aPos and bPos declarations. AreClose() method Your AreClose() method could be rewritten in a much shorter and cleaner way, like so: private static bool AreClose(GameObject a, GameObject b) { double result = Distance(a, b); return (result < 2.5); // random magic number for now } That's because the expression result < 2.5 itself evaluates to a boolean, so there's no need to check whether its value is true or false, you can simply return it. You could further shorten it to... private static bool AreClose(GameObject a, GameObject b) { return Distance(a, b) < 2.5; // random magic number for now } ...since, in my opinion, the variable result doesn't add anything to the readability or the clarity of the code. Seeing Distance(a, b) < someMagicNumber is clear enough on its own. You could go even further and take advantage of C#'s expression body definitions, like so private static bool AreClose(GameObject a, GameObject b) => Distance(a, b) < 2.5; // random magic number for now ...which doesn't look too great here because of that comment. I would recommend you put that magic number into a const variable, either declared inside AreClose() or as a class variable. This will make your code easier to modify in the future, and also allows you to give it a meaningful name like MIN_DISTANCE.
private static bool AreClose(GameObject a, GameObject b) => Distance(a, b) < MIN_DISTANCE; Distance() method Like I said earlier, I'm not too familiar with Unity, but I'm pretty sure it has a built-in method for computing the distance between two vectors, like so Vector3.Distance(aPos, bPos). There's nothing inherently wrong with doing it yourself if you're just trying to learn and get familiar with a new language or concept, but if you're looking for optimal performance or accuracy, I would recommend making sure there isn't already a function that does what you need before trying to reinvent the wheel -although in this specific case it's probably not too important. Additionally, I would avoid using Math.Pow() when you only want to square a number, as it's quite an expensive operation compared to simply doing (aPos.x - bPos.x) * (aPos.x - bPos.x). This is because Math.Pow() needs to be able to handle non-integer exponents and (probably) uses Taylor series (https://en.wikipedia.org/wiki/Taylor_series) to compute powers for arbitrary exponents, even if your exponent happens to be 2. So you can just compute the square yourself, which should significantly improve performance. You could also create a DistanceSquared() method, and compare it with MIN_DISTANCE * MIN_DISTANCE, which is equivalent to comparing Distance() and MIN_DISTANCE. This allows you to avoid using Math.Sqrt(), which is, again, a rather slow operation. Then you can rewrite Distance() like this, if you still want it : public static double Distance(GameObject a, GameObject b) => Math.Sqrt(DistanceSquared(a, b));
{ "domain": "codereview.stackexchange", "id": 39125, "tags": "c#, unity3d" }
transformation from map to laser
Question: Hi, I have a robot system which has frame odom and laser. tf publishes the transformation from odom to laser. I am building my own localization node which publishes the frame "map" and the transformation from map to odom. When I use rviz, if I select odom as fixed frame, I can see the laser data. If I select "map" as fixed frame, rviz complains that there is no transformation from map to laser. I thought since I defined the transformation from map to odom, and the existing system has the transformation from odom to laser, rviz should be able to figure out the transformation from map to laser. But this does not seem to be the case. What's wrong? Originally posted by AutoCar on ROS Answers with karma: 102 on 2018-11-21 Post score: 0 Original comments Comment by Andy West on 2018-11-21: Try rosrun tf view_frames which will produce a graph showing the connections between the TF frames. Use evince frames.pdf to view the graph. Are all the frames joined up as you expect? Comment by AutoCar on 2018-11-21: I can see map->odom->base_link->base_footprint. But laser is not there. Comment by AutoCar on 2018-11-21: Please note that the transformation from odom to base_link is inside tf. However, the transformation from base_link to Laser is described in tf_static. Answer: I found the root cause was the time when running bag files. I need to set use_sim_time to be true. Originally posted by AutoCar with karma: 102 on 2018-11-24 This answer was ACCEPTED on the original site Post score: 0
{ "domain": "robotics.stackexchange", "id": 32081, "tags": "navigation, odometry, mapping, ros-kinetic" }
Linked allocation in operating systems: why not use a doubly linked list?
Question: I'm reading here that linked allocation uses a linked list, where each block has a pointer to the next block. For sequential access, this would take moving through the blocks one by one. Would it not be better to use a doubly linked list so that this time could be cut in half? Algorithmic analysis I understand that sequential access remains $O(n)$ worst-case time for both singly linked list and doubly linked list. But non-asymptotically the time spent does become half.
{ "domain": "cs.stackexchange", "id": 17639, "tags": "filesystems" }
Performing a "mean blur"
Question: I perform a mean blur on a number[][]-array (called grid in the code). I iterate the array, and calculate average values from adjacent "cells". The kernel size defines how many adjacent values are used for average calculation. /** * Performs a mean blur with a given kernel size. * @param {number[][]} grid - Data grid holding values to blur. * @param {number} kernelSize - Describes the size of the "blur"-cut for each cell. */ static MeanBlur(grid, kernelSize) { // Result let newGrid = []; // Iterate original grid for (let row = 0; row < grid.length; row++) { let newRow = []; for (let col = 0; col < grid[row].length; col++) { let adjacentValues = []; // Get all adjacent values for (let i = 1; i <= kernelSize; i++) { adjacentValues = adjacentValues.concat(this._getAdjacentValues(row, col, i, grid)); } // Calculate average value const average = adjacentValues.reduce((a, b) => a + b, 0) / adjacentValues.length; // add average value to the current row newRow.push(average); } newGrid.push(newRow); } return newGrid; } /** * Return all adjacent values of a cell in a `number[][]`-array. * The amount of adjacent values depends on the kernel size. * @param {number} row - Describes the cell's row position. * @param {number} col - Describes the cell's column position. * @param {number} kernelSize - The kernel size. * @param {number[][]} grid - Original data grid.
*/ static _getAdjacentValues(row, col, kernelSize, grid) { if (kernelSize < 1) { throw "Kernel size value should be at least 1."; } let adjacentValues = []; // north if (row - kernelSize >= 0) { adjacentValues.push(grid[row - kernelSize][col]); } // north-east if (row - kernelSize >= 0 && col + kernelSize < grid[row].length) { adjacentValues.push(grid[row - kernelSize][col + kernelSize]); } // east if (col + kernelSize < grid[row].length) { adjacentValues.push(grid[row][col + kernelSize]); } // south-east if (row + kernelSize < grid.length && col + kernelSize < grid[row].length) { adjacentValues.push(grid[row + kernelSize][col + kernelSize]); } // south if (row + kernelSize < grid.length) { adjacentValues.push(grid[row + kernelSize][col]); } // south-west if (row + kernelSize < grid.length && col - kernelSize >= 0) { adjacentValues.push(grid[row + kernelSize][col - kernelSize]); } // west if (col - kernelSize >= 0) { adjacentValues.push(grid[row][col - kernelSize]); } // north-west if (row - kernelSize >= 0 && col - kernelSize >= 0) { adjacentValues.push(grid[row - kernelSize][col - kernelSize]); } return adjacentValues; } What I am most concerned with is _getAdjacentValues. I think there has to be a more convenient way to get adjacent values without including values at implausible indices. What I mean is that i.e. at the grid position (0, 0) even with a kernel size of 1, values like (-1, -1) will be checked automatically by a loop. Edit: I was asked for clarification on how the kernel size actually is supposed to work. Assuming that I want to calculate the value for the current position (black cell). With a kernel size of 1 I would include all green cells for average value calculation. With a kernel size of 2 I would include all green and blue cell values. And with a kernel size of 3 all orange, blue and green cell values will be used for average calculation. 
Answer: Convolution Filter In CG this type of processing is called a convolution filter, and there are many strategies used to handle edges. As the previous answer points out, for performance you are best to use typed arrays and avoid creating arrays for each cell you process. In your example for (let i = 1; i <= kernelSize; i++) { adjacentValues = adjacentValues.concat(this._getAdjacentValues(row, col, i, grid)); } const average = adjacentValues.reduce((a, b) => a + b, 0) / adjacentValues.length; You can do the reduce in the same loop where you create the sub-array, and avoid the 9 new arrays you create and the need to iterate again. Example: substitute a value at edges. const edgeVal = 0; var val, sum = 0; for (let i = 0; i < 9; i++) { val = grid[row + (i / 3 | 0) - 1]; if (val !== undefined) { val = val[col + (i % 3) - 1]; sum += val !== undefined ? val : edgeVal; } else { sum += edgeVal; } } const average = sum / 9; Or ignore edges var val; var sum = 0; var count = 0; for (let i = 0; i < 9; i++) { val = grid[row + (i / 3 | 0) - 1]; if (val !== undefined) { val = val[col + (i % 3) - 1]; if (val !== undefined) { sum += val; count ++; } } } const average = sum / count; Better yet, work on flattened arrays to avoid the double indexing for 2D arrays, and create the result array at the start rather than pushing to it for each cell. Workers Convolution filters are ideally suited for parallel processing, with an almost n / p performance increase (n is the number of cells, p the number of processing nodes). Using web workers on an i7 with 4 cores (8 virtual cores) can give you an 8-times performance increase. (On Chrome, and maybe others, you can get a core count from window.clientInformation.hardwareConcurrency.) Attempting to use more workers than available cores will result in slower processing. GPU For the ultimate performance the GPU, via WebGL, is available and will provide realtime processing of very large data sets (16M cells and above). A common non-CG use is processing layers in a convolutional neural network. 
An example of a convolution filter (about halfway down the page) using WebGL can easily be adapted to most data types; however, doubles will only be available on a small number of hardware setups.
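The flattened-array suggestion above can be sketched as follows (shown in Python for brevity; the same single-index scheme `row * width + col` carries over directly to a JS `Float64Array`). This is a plain 3x3 mean blur that ignores out-of-range cells, and it includes the centre cell in the average, which differs from the original `_getAdjacentValues`.

```python
def mean_blur_flat(data, width, height):
    """3x3 mean blur over a flattened row-major grid, ignoring edge overflow."""
    out = [0.0] * (width * height)
    for row in range(height):
        for col in range(width):
            total, count = 0.0, 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    r, c = row + dr, col + dc
                    if 0 <= r < height and 0 <= c < width:
                        total += data[r * width + c]  # single index, no nested arrays
                        count += 1
            out[row * width + col] = total / count
    return out
```

The output array is allocated once up front, and no per-cell arrays are created, which is the main performance point of the answer.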
{ "domain": "codereview.stackexchange", "id": 32033, "tags": "javascript, array, signal-processing" }
Translated publications of Boltzmann
Question: I have been looking for Boltzmann's papers (in English) and have had no luck. Does anyone know if they were translated in the first place, and if so, where to find them? Answer: Four of Boltzmann's prominent papers are translated into English in Ludwig Boltzmann: Further Studies on the Thermal Equilibrium of Gas Molecules (from Sitzungsberichte der kaiserlichen Akademie der Wissenschaften, Vienna, 1872) & Ludwig Boltzmann: On the Relation of a General Mechanical Theorem to the Second Law of Thermodynamics (from Sitzungsberichte der kaiserlichen Akademie der Wissenschaften, Vienna, 1877) & Ludwig Boltzmann: Reply to Zermelo's Remarks on the Theory of Heat (from Annalen der Physik, 1896) & Ludwig Boltzmann: On Zermelo's Paper "On the Mechanical Explanation of Irreversible Processes" (from Annalen der Physik, 1897) of Brush, Stephen G., and Nancy S. Hall, The Kinetic Theory of Gases: An Anthology of Classic Papers with Historical Commentary. London; River Edge, NJ: Imperial College Press; distributed by World Scientific Pub., 2003.
{ "domain": "physics.stackexchange", "id": 27614, "tags": "thermodynamics, statistical-mechanics, specific-reference" }
Small nucleus emission from a larger nucleus
Question: Like alpha decay, is there the possibility of a small $(n,z)$ nucleus coming out of a large $(N,Z)$ nucleus? Why don't lithium and beryllium decay out of big nuclei the way helium does? Answer: Yes, this is possible. It is the case for $^{223}\textrm{Ra}$, for instance, which can decay through an $\alpha$ process with a lifetime of $\sim$ 11 days, but also through the emission of a $^{14}\textrm{C}$ nucleus. However, this decay mode is extremely disfavored (branching ratio $\sim 10^{-9}$). There are two factors at play here. One is energetic: the height of the energy barrier sets the amplitude of the tunneling process. The other factor is that it is much less likely for a nucleus larger than helium to form inside the large nucleus and escape. This increases the lifetime for this mode by several orders of magnitude. References [1] Lund/LBNL Nuclear Data Search, http://nucleardata.nuclear.lu.se/toi/nuclide.asp?iZA=880223 [2] Introductory Nuclear Physics, Kenneth Krane, section 8.4, "Theory of $\alpha$ emission".
{ "domain": "physics.stackexchange", "id": 35577, "tags": "nuclear-physics, radiation" }
Range sensor does not detect objects in gazebo
Question: Hi, I am new to ROS and try to create an urdf file to represent my robot in rviz and gazebo, but I can't get the range sensor to detect objects in the simulator. The message at /arduino/sensor/ir_left always reports 3.75 as range, which is the maximum. For test, I added a gpu_ray scanner and that shows the object just fine in rviz. Spinning the robot in place doesn't help either, so it's doesn't seem to be the sensor angle. What's wrong? Or any suggestion on how to debug this further? The sensor definition is below. Thanks, Joep <gazebo reference="ir_left"> <sensor type="ray" name="ir_leftabc"> <pose>0 0 0 0 0 0</pose> <update_rate>50</update_rate> <ray> <scan> <horizontal> <samples>1</samples> <resolution>1.0</resolution> <min_angle>-0.01</min_angle> <max_angle>0.01</max_angle> </horizontal> <vertical> <samples>1</samples> <resolution>1</resolution> <min_angle>-0.01</min_angle> <max_angle>0.01</max_angle> </vertical> </scan> <range> <min>0.01</min> <max>3.75</max> <resolution>0.02</resolution> </range> </ray> <plugin filename="libgazebo_ros_range.so" name="gazebo_ros_range"> <gaussianNoise>0.005</gaussianNoise> <alwaysOn>true</alwaysOn> <updateRate>5</updateRate> <topicName>/arduino/sensor/ir_left</topicName> <frameName>ir_left</frameName> <visualize>true</visualize> <radiation>infrared</radiation> <fov>0.02</fov> </plugin> </sensor> </gazebo> Originally posted by Joep on ROS Answers with karma: 1 on 2018-05-20 Post score: 0 Original comments Comment by l4ncelot on 2018-05-21: If you want to use GPU laser plugin the filename of this plugin is called libgazebo_ros_gpu_laser.so IMO instead of libgazebo_ros_range.so. The library should be saved in /opt/ros/kinetic/lib location. Try to run gazebo in verbose mode to see if the plugin is loaded or not. Comment by l4ncelot on 2018-05-21: Btw this is ROS related forum. For the future post Gazebo related questions here. Comment by l4ncelot on 2018-05-21: Also the type of sensor should be gpu_ray instead of ray. 
Comment by Joep on 2018-05-21: @i4ncelot - thanks for your reply! I'm trying to create a sensor_msgs/Range topic, not laserscan. Laserscan works okay. Tried to check the gazebo log at the location shown in the launch window. There are a few logfiles there, but none with gazebo in the name or with 'laser' or 'ray' in it. Comment by l4ncelot on 2018-05-22: My bad, I thought you wanted to use the gpu_ray sensor. I've tried your model and it works for me just fine when I subscribe to /arduino/sensor/ir_left topic. Can you edit your question with the launch file you use? Have you tried to launch gazebo with verbose option? Answer: Hi Joep, I think I found what happens to your range ray. The configuration is correct but it is very likely that, due to the way that you have created your robot, the range is colliding against itself. Either it is colliding against the collision mesh of the sensor itself or against the body of the robot. I cannot say which one because you did not publish the whole URDF. Anyway, I created a video about how to detect that. Check it out here: https://youtu.be/Qq4Xsl-n3nM Originally posted by R. Tellez with karma: 874 on 2018-05-22 This answer was ACCEPTED on the original site Post score: 2 Original comments Comment by l4ncelot on 2018-05-22: Wouldn't the sensor report something like ~0.05 range when colliding with itself as shown in your video? Comment by l4ncelot on 2018-05-22: Oh... if he has the collision box around the sensor greater than the max range it will return exactly the max range. I see it now. Comment by R. Tellez on 2018-05-22: Yes l4ncelot. You can see at minute 6:24 of the video how the rostopic echo is providing exactly that value, reporting the collision with itself Comment by Joep on 2018-05-22: Hi Ricardo, thank you so much for your video. Very informative. And it shows my code could work! 
Is your robot model available for download? Would love to find out the differences with mine. Comment by R. Tellez on 2018-05-22: The simulation is of the ROS Live Class that we do every Tuesday 18:00CEST online (open to anyone https://youtu.be/ajgwQ_lGFL4). In order to get it, create a free account at rds.theconstructsim.com and afterwards, click this link: https://goo.gl/rAovtd That will copy my project to your RDS account.
{ "domain": "robotics.stackexchange", "id": 30864, "tags": "ros, gazebo, ros-kinetic, sensor, range" }
How does an energy input actually change a molecule?
Question: I'm reading an online biochemistry textbook for fun, and a question that has bothered me on chemistry before popped up again. The question here is not specific to the chemical process described, but I needed an example: By adding energy to a molecule of formic acid ($\ce{CH2O2}$), it kicks out an O and becomes formaldehyde ($\ce{CH2O}$). More energy makes it methanol ($\ce{CH3OH}$), and more energy makes that methane ($\ce{CH4}$). I get that energy allows the molecule to become a different molecule of a higher energy state; it essentially stores the energy as new or altered chemical bonds. What I never understood is why? It is natural for the molecule to try to go to a lower energy state, releasing the energy in the bonds, as it does under oxidation/"burning". So why doesn't it just immediately release the added energy, rather than store it in bonds? What prevents the formaldehyde from saying "I see your energy, but I'll just do what's easier for me and release some of my own, and become formic acid"? After all, no matter how much energy I use to blast a block of stone into the air, it's going to fall down again, unless someone up there catches it. What's "catching" the higher-energy molecule and preventing it from "falling back" into the more stable, lower-energy molecule? The answer seems like it might be leaning on atomic theory more than chemistry, but I thought I'd give it a shot here. Answer: It is about transition energy barriers. Imagine a mountainous landscape with lots of valleys of different depths. Stable molecules are large balls that sit in the bottoms of the valleys. Add some energy and the ball might rise up from the valley floor, but it will fall back. Add enough energy and the ball can rise high enough to get over the barrier to the next valley and will fall into a different place with a different depth (and here the analogy falls down, as it also becomes a different ball). 
So, whatever the relative depths of the valleys, there is a big barrier to getting from one valley to another. Even when one valley is much lower than another (so moving from one to another releases energy) you still have to put energy in to cause the transition. Chemical reactions are often controlled by those transition barriers between one molecule and another rather than by the absolute energy difference between the molecules. Methane and oxygen will burn to give various compounds and will release energy, but there is a big barrier to starting the burning process. This means that the deoxygenation reactions you describe are possible and won't spontaneously reverse even though they would release energy if they did.
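The barrier picture can be made quantitative with the Arrhenius relation $k = A\,e^{-E_a/RT}$: the reaction rate depends on the barrier height $E_a$, not on the overall energy released. The pre-factor and barrier values below are illustrative only, not data for any specific reaction.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(prefactor, barrier_j_per_mol, temp_k):
    """Rate constant from the Arrhenius equation k = A * exp(-Ea / (R T))."""
    return prefactor * math.exp(-barrier_j_per_mol / (R * temp_k))

# Same pre-factor, two hypothetical barrier heights, room temperature:
small_barrier = arrhenius_rate(1e13, 50_000, 298)
large_barrier = arrhenius_rate(1e13, 150_000, 298)
```

Tripling the barrier suppresses the rate by roughly eighteen orders of magnitude here, which is why a mixture like methane and oxygen can sit unreacted until something supplies the activation energy.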
{ "domain": "chemistry.stackexchange", "id": 5431, "tags": "kinetics, energy" }
Delta pulse approximation
Question: Let us suppose we have a harmonic oscillator Hamiltonian with a controlling term $xu(t)$: \begin{equation} H(t)=\frac{1}{2}m\omega^2 x^2 - u(t)x \end{equation} As stated in "A. G. Butkovskiy and Y. I. Samoilenko, Control of Quantum-Mechanical Processes and Systems", an optimal $u(t)$ that maximizes the transition probability from e.g. $|0\rangle$ to $|1\rangle$ must satisfy the condition: \begin{equation} \left|\int_0^T u(t)e^{-it}\,dt\right|=\sqrt{2}\end{equation} An example of such a $u(t)$ is a $\sqrt{2}\,\delta(t)$ pulse. How can one generate a $\delta(t)$ pulse in, for example, a simulation? Which approximation should I use? Answer: A Gaussian should work, as long as the area under it is unity and it is as short as possible in the simulation. A rectangular pulse of height $w$ and width $1/w$ will also work, with $w$ made as large as possible. Both of these pulses converge to a delta function as their height goes to infinity.
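One quick way to validate such an approximation in a simulation is to check the optimality condition numerically: a Gaussian of total area $\sqrt{2}$, narrow compared to the oscillator period, should give $\left|\int_0^T u(t)e^{-it}\,dt\right| \approx \sqrt{2}$, since $e^{-it}$ is nearly constant over the pulse. The grid parameters below are arbitrary choices for the sketch.

```python
import math

def gaussian_pulse(t, area, t0, sigma):
    """Gaussian centred at t0 with width sigma, scaled so its total area is `area`."""
    return area * math.exp(-(t - t0) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def condition_value(sigma, area=math.sqrt(2), t0=0.05, T=0.1, dt=1e-4):
    """|integral_0^T u(t) exp(-i t) dt| evaluated by a midpoint Riemann sum."""
    re = im = 0.0
    for k in range(int(T / dt)):
        t = (k + 0.5) * dt
        u = gaussian_pulse(t, area, t0, sigma)
        re += u * math.cos(t) * dt
        im -= u * math.sin(t) * dt
    return math.hypot(re, im)
```

As sigma shrinks the value approaches $\sqrt{2}$; the Fourier transform of a Gaussian shows the deviation is only a factor $e^{-\sigma^2/2}$, so even a fairly wide pulse is close.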
{ "domain": "physics.stackexchange", "id": 70697, "tags": "quantum-mechanics, simulations, perturbation-theory" }
What is the reaction mechanism of a selective catalytic reaction with urea?
Question: What is the reaction mechanism of the following reaction: $$\ce{4NO + 2(NH2)2CO + O2 -> 4N2 + 4H2O + 2CO2}$$ I expect that the catalyst is vanadium oxide and titanium oxide. Where can I find a detailed description of the reaction mechanism? Answer: After researching this topic, I've found the mechanism to be:
{ "domain": "chemistry.stackexchange", "id": 9672, "tags": "organic-chemistry, inorganic-chemistry, reaction-mechanism, reference-request" }
Language model with grammar for Pocketsphinx
Question: Hi Does anyone know if there's a way to include grammar in the language model for Pocketsphinx? Either by using a jsgf-file or something else. Any ideas would be appreciated. Originally posted by Kinna on ROS Answers with karma: 35 on 2012-12-02 Post score: 0 Answer: The Pocketsphinx ROS package is based on gstreamer's pocketsphinx plugin. As far as I know, this does not accept the jsgf-files. This question has come up a few times, but as far as I know, nobody has found a way to make it work. Originally posted by fergs with karma: 13902 on 2012-12-02 This answer was ACCEPTED on the original site Post score: 2
{ "domain": "robotics.stackexchange", "id": 11955, "tags": "ros, pocketsphinx" }
bridging the connection from the Helmholtz free energy in classical thermo to stat mech
Question: The Helmholtz free energy from classical thermodynamics is defined as $$F = U - TS.$$ Taking the differential and manipulating algebraically, we arrive at $$dF = -p\,dV - S\,dT.$$ Observe that $$p = -\left(\frac{\partial F}{\partial V}\right)_T$$ and $$S = -\left(\frac{\partial F}{\partial T}\right)_V.$$ I am perfectly comfortable with the derivation up till here. In my text, it is mentioned that $$F = -k_B T \ln Z,$$ but it is not at all obvious where this expression came from. Would someone be nice enough to clear things up? Answer: There are multiple ways to justify this identification. The first one is to recognize that in thermodynamics the Helmholtz free energy is the right thermodynamic potential for figuring out the spontaneous evolution of your thermodynamic system (by virtue of the second principle of thermodynamics) at fixed $(N,V,T)$. In statistical mechanics, this is expressed by the canonical ensemble and the corresponding statistics associated with it. If you were to have a mesoscale observable $X$ for which you want to know the probability of $X$ having value $x$, then you would compute \begin{equation} p(x|N,V,T) = \frac{\sum_{\{ i; \:X(i) = x\}} e^{-\beta E_i}}{e^{-\beta F(N,V,T)}} \end{equation} Now, the sum in the numerator can be thought of as a partial partition function (with, for instance, an additional potential that biases all the microstates $i$ so that $X(i) = x$). We can then define some kind of partial free energy $F(x|N,V,T)$ such that \begin{equation} p(x|N,V,T) = \frac{e^{-\beta F(x|N,V,T)}}{e^{-\beta F(N,V,T)}} \end{equation} We then see that the most probable value of the meso-observable $X$ is the one that minimizes the "partial" free energy at fixed $(N,V,T)$; thus the logarithm of the partition function plays a role equivalent to that of the Helmholtz free energy in usual thermodynamics, with the second principle here replaced by "most probable value". 
The second one is more direct and more formal. It consists in noticing that by definition \begin{equation} \langle E \rangle = \sum_i E_i \frac{e^{-\beta E_i}}{Z(N,V,T)} \end{equation} and then writing the tautology $E_i = -k_B T \ln e^{-\beta E_i} = -k_B T \ln \left( e^{-\beta E_i} \frac{Z(N,V,T)}{Z(N,V,T)} \right) = -k_B T \ln p(i|N,V,T) - k_B T \ln Z(N,V,T)$. Upon replacing in the above formula we get: \begin{eqnarray} \langle E \rangle &=& -k_B T \sum_i p(i|N,V,T) [\ln p(i|N,V,T)+ \ln Z(N,V,T)] \end{eqnarray} Now, defining the entropy as the Shannon entropy of the distribution, i.e. \begin{equation} S(N,V,T) = -k_B \sum_i p(i|N,V,T) \ln p(i|N,V,T) \end{equation} we get \begin{equation} -k_B T \ln Z(N,V,T) = \langle E \rangle - TS(N,V,T) \end{equation} By comparison with the definition of the Helmholtz free energy, it follows naturally that $F = -k_B T \ln Z$. The third and last one for this answer consists in using the role of cumulant generating function played by the logarithm of the partition function (which I discussed in some detail here) to retrieve partial derivatives which bear a thermodynamic significance (like the internal energy, the chemical potential and so forth) and then again draw a strong parallel with the Helmholtz free energy.
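The second derivation can be checked numerically on a toy system with arbitrarily chosen energy levels: computing $\langle E\rangle$ and the Shannon entropy from the Boltzmann weights reproduces $-k_B T \ln Z$ exactly (here $k_B$ is absorbed into $kT$, so the entropy below is $S/k_B$).

```python
import math

def free_energies(energies, kT):
    """Return (-kT ln Z, <E> - T S) for Boltzmann-distributed energy levels."""
    z = sum(math.exp(-e / kT) for e in energies)        # partition function
    probs = [math.exp(-e / kT) / z for e in energies]   # canonical probabilities
    avg_e = sum(p * e for p, e in zip(probs, energies)) # <E>
    shannon = -sum(p * math.log(p) for p in probs)      # S / k_B
    return -kT * math.log(z), avg_e - kT * shannon
```

The two numbers agree to machine precision for any choice of levels and temperature, which is exactly the identity derived above.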
{ "domain": "physics.stackexchange", "id": 21887, "tags": "statistical-mechanics" }
Electrostatic field-Field lines relationship
Question: How is the $\frac{1}{r^2}$ dependence of the electric field intensity due to a stationary point charge consistent with the concept of field lines? Answer: I've solved the problem using the concept of solid angle. In the figure, the charge $q$ is at the origin and produces an electric field $E$ in the surrounding space. To understand the dependence of the electric field lines on the area, or rather on the solid angle subtended by the area element, we must relate the area to the solid angle $\Delta\Omega$. We know that within a given solid angle the number of radial field lines is the same. For two points $P_1$ and $P_2$ at distances $r_1$ and $r_2$ from the charge, the element of area subtending the solid angle $\Delta\Omega$ is $\Delta\Omega\, r_1^2$ at $P_1$ and $\Delta\Omega\, r_2^2$ at $P_2$ respectively. The number of lines (say $n$) cutting these area elements is the same. The number of field lines cutting a unit area element is therefore $n/(\Delta\Omega\, r_1^2)$ at $P_1$ and $n/(\Delta\Omega\, r_2^2)$ at $P_2$ respectively; since $n$ and $\Delta\Omega$ are common, the strength of the field clearly has a $1/r^2$ dependence. Hence, the $1/r^2$ dependence of the electric field due to a stationary point charge is consistent with the concept of electric field lines.
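A short numerical check of the argument, with arbitrary numbers: fixing $n$ lines inside a solid angle $\Delta\Omega$, the line density $n/(\Delta\Omega\, r^2)$ drops by a factor of 4 when $r$ doubles.

```python
def line_density(n_lines, solid_angle, r):
    """Field lines per unit area on a sphere patch of given solid angle at radius r."""
    return n_lines / (solid_angle * r ** 2)
```

Doubling r divides the density by exactly four, matching the inverse-square field strength.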
{ "domain": "physics.stackexchange", "id": 56203, "tags": "electrostatics, electric-fields, gauss-law, vector-fields" }
Rotating dielectric cylinder in magnetic field?
Question: In this example we find that the polarization in a rotating cylinder is given by an iterative method (see section 2.2 in the link) using an effective $E$-field of $\vec E_0=\omega B \vec r/c$. Why are these iterations to find the final polarization necessary, since (by definition of $\chi_e$) the final state of the polarization is $$\vec P=\chi_e \vec E_0\,?$$ Surely if you just applied an electric field to a dielectric sphere the above equation would hold; why doesn't it in the current situation? Answer: I think that your confusion is due to a subtlety associated with the solution approach of going to a rotating frame of reference. If you're just trying to find the polarization of a stationary rectangular, block-shaped slab of dielectric material due to a uniform electric field, then you could simply write down an electric polarization equation similar to the one you wrote above and be finished. However, when you go to a rotating frame of reference to solve the current problem, you are considering the response of the electrons in the rotating dielectric cylinder not to a real E-field, but to what you yourself acknowledged to be an "effective" or "pseudo" E-field of $\vec E_0=\omega B\vec r /c$. So you can't just stick this pseudo E-field $\vec E_0$ into your polarization equation and say that you're done, because you forgot something: the real E-field. Initially, there is no real E-field. But when the cylinder starts rotating, the electrons start moving in response to the $\vec v \times\vec B$ "pseudo" E-field. But when they start moving and therefore start polarizing the dielectric medium of the cylinder, then what happens? The polarization that the electrons produce generates an E-field of its own! So now you have both the "pseudo" E-field and a "real" E-field, so you have to re-consider and re-calculate the polarization of the medium in response to the sum of BOTH E-fields. But, wait, you're not done yet. 
When you re-calculate the polarization and find that it is different from your first solution, then that means that you have to go back and re-calculate the real E-field again. Do you see where this is leading? You have to iteratively re-calculate the polarization and the (real) E-field over and over again because they depend on each other. Again, this complication arises because you have a contribution to the polarization from a "pseudo" E-field that was produced by going to a rotating frame of reference. As your linked example says, it's a "chicken-and-egg" problem.
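The chicken-and-egg structure is just a fixed-point problem, and the iteration can be sketched with one schematic scalar equation, $P = \chi(E_0 + \alpha P)$, where $\alpha P$ stands in for the real field the polarization produces (all symbols here are schematic; this is not the actual cylinder boundary-value problem):

```python
def self_consistent_polarization(chi, e0, alpha, tol=1e-12, max_iter=10_000):
    """Iterate P = chi * (E0 + alpha * P) until self-consistent."""
    p = 0.0  # start with no polarization, as in the physical problem
    for _ in range(max_iter):
        p_new = chi * (e0 + alpha * p)
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    raise RuntimeError("iteration did not converge")
```

The converged value matches the closed form $P = \chi E_0/(1-\chi\alpha)$, which the iteration reaches whenever $|\chi\alpha| < 1$; a depolarizing feedback ($\alpha < 0$) always converges.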
{ "domain": "physics.stackexchange", "id": 26176, "tags": "electromagnetism, dielectric" }
How to move turtlebot for a certain pre-defined distance
Question: Hello, we would like to write code that makes a Turtlebot move linearly for a specified distance, for example 2.3 meters. We would like to know the best strategy to implement this. One way we thought of is to send a constant linear velocity on topic twist.twist.linear.x and stop after a certain time t, such that t = distance/velocity. Is this a good approach? Is there a better strategy? Thanks Anis Originally posted by Anis on ROS Answers with karma: 253 on 2013-04-10 Post score: 0 Answer: One approach that is slightly better than timing is to use TF to see how far you've gone. One example I know of is using TF to measure angle in an older piece of code we used to draw a square on the ground using a TurtleBot. Doing a distance comparison should be even easier than an angle comparison. In a straight line, the odometry of a Turtlebot shouldn't be too bad, so TF should provide a more repeatable distance than timing (especially over longer distances at slower speeds). Originally posted by fergs with karma: 13902 on 2013-04-18 This answer was ACCEPTED on the original site Post score: 0 Original comments Comment by arpit901 on 2016-06-06: Broken link
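The TF comparison boils down to recording the start pose and checking the Euclidean distance each control cycle. A hypothetical pure-Python sketch of that check (the actual node would look the pose up via TF and publish Twist commands on cmd_vel; those parts are omitted here):

```python
import math

def distance_travelled(start_xy, current_xy):
    """Straight-line distance between two (x, y) poses."""
    return math.hypot(current_xy[0] - start_xy[0], current_xy[1] - start_xy[1])

def keep_moving(start_xy, current_xy, goal_distance):
    """True while the robot should keep publishing a forward velocity."""
    return distance_travelled(start_xy, current_xy) < goal_distance
```

In the control loop you would publish a constant forward velocity while `keep_moving` is true, then publish zero velocity once the goal distance (e.g. 2.3 m) is reached.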
{ "domain": "robotics.stackexchange", "id": 13759, "tags": "ros" }
Physics of the Internet?
Question: Is there a way to describe the Internet in terms of a physics theory, like how the atom is described by quantum mechanic? If there is, how is it described by this theory. Answer: First of all, you need to consider that the Internet is at one level a physical thing. It is made of a collection of servers, clients, routers, switches, and the like, all of which are typically metal and plastic boxes full of electronic components. Within these boxes of course you have the physics of semiconductors, power supplies, etc. These boxes are connected to each other by physical things also: metallic wires, glass and plastic fiber-optics, radio waves, microwaves. There you have the physics of electromagnetics, optics, etc. Let's not forget satellites, they're in there too! So we have the physics of orbits and relativity. There are trillions of physical components to the Internet, and they are each made of high-tech materials using modern science. So the literal answer to your question is that nearly every subfield of modern physics, except perhaps the most theoretical, are involved in making the Internet work.
{ "domain": "physics.stackexchange", "id": 2339, "tags": "information" }
What determines the "speed" of a programming language?
Question: Suppose a program was written in two distinct languages, call them language X and language Y. If their compilers generate the same byte code, why should I use language X instead of language Y? What defines one language as faster than another? I ask this because you often see people say things like: "C is the fastest language; ATS is a language as fast as C." I was seeking to understand the definition of "fast" for programming languages. Answer: There are many reasons that may be considered for choosing a language X over a language Y. Program readability, ease of programming, portability to many platforms, and the existence of good programming environments can be such reasons. However, I shall consider only the speed of execution, as requested in the question. The question does not seem to consider, for example, the speed of development. Two languages can compile to the same bytecode, but that does not mean the same code will be produced. Actually, bytecode is only code for a specific virtual machine. It does have engineering advantages, but does not introduce fundamental differences with compiling directly for specific hardware. So you might as well consider comparing two languages compiled for direct execution on the same machine. This said, the issue of the relative speed of languages is an old one, dating back to the first compilers. For many years, in those early times, professionals considered that hand-written code was faster than compiled code. In other words, machine language was considered faster than high-level languages such as Cobol or Fortran. And it was: both faster and usually smaller. High-level languages still developed because they were much easier to use for many people who were not computer scientists. The cost of using high-level languages even had a name: the expansion ratio, which could concern the size of the generated code (a very important issue in those times) or the number of instructions actually executed.
The concept was mainly experimental, but the ratio was greater than 1 at first, as compilers did a fairly simple-minded job by today's standards. Thus machine language was faster than, say, Fortran. Of course, that changed over the years, as compilers became more sophisticated, to the point that programming in assembly language is now very rare. For most applications, assembly language programs compete poorly with code generated by optimizing compilers. This shows that one major issue is the quality of the compilers available for the language considered: their ability to analyse the source code and optimize it accordingly. This ability may depend to some extent on features of the language that emphasize the structural and mathematical properties of the source, making the work easier for the compiler. For example, a language could allow the inclusion of statements about the algebraic properties of user-defined functions, so as to allow the compiler to use these properties for optimization purposes. The compiling process may be easier, hence producing better code, when the programming paradigm of the language is closer to the features of the machines that will interpret the code, whether real or virtual. Another point is whether the paradigms implemented in the language are close to the type of problem being programmed. It is to be expected that a programming language specialized for specific programming paradigms will compile features related to those paradigms very efficiently. Hence, for both clarity and speed, it may pay to choose a programming language adapted to the kind of problem being programmed. The popularity of C for system programming is probably due to the fact that C is close to the machine architecture, and that system programming is directly related to that architecture too. 
Some other problems will be more easily programmed, with faster execution, using logic programming and constraint-resolution languages. Complex reactive systems can be very efficiently programmed with specialized synchronous programming languages like Esterel, which embodies very specialized knowledge about such systems and generates very fast code. Or to take an extreme example, some languages are highly specialized, such as syntax description languages used to program parsers. A parser generator is nothing but a compiler for such a language. Of course, it is not Turing complete, but these compilers are extremely good at their specialty: producing efficient parsing programs. The domain of knowledge being restricted, the optimization techniques can be very specialized and tuned very finely. These parser generators are usually much better than what could be obtained by writing code in another language. There are many highly specialized languages with compilers that produce excellent and fast code for a restricted class of problems. Hence, when writing a large system, it may be advisable not to rely on a single language, but to choose the best language for different components of the system. This, of course, raises problems of compatibility. Another point that often matters is simply the existence of efficient libraries for the topics being programmed. Finally, speed is not the only criterion, and may be in conflict with other criteria such as code safety (for example with respect to bad input, or resilience to system errors), memory use, ease of programming (though paradigm compatibility may actually help that), object code size, program maintainability, etc. Speed is not always the most important parameter. Also, it may take different guises, like complexity, which may be average-case or worst-case complexity. Also, in a large system as in a smaller program, there are parts where speed is critical, and others where it matters little. 
And it is not always easy to determine that in advance.
{ "domain": "cs.stackexchange", "id": 4449, "tags": "programming-languages, compilers" }
Is there a formulation for (self-)accelerating fluid flow through permeable medium?
Question: I have a permeable system in which there is an accelerating fluid flow. Imagine a sponge that is squeezed. The fluid starts at rest, accelerates and flows out from the sponge. How do I calculate the fluid speed? Is it possible to derive an equation where the fluid flow has a feedback that increases the fluid flow? Perhaps something similar to percolating water through a sand wall or dam when it starts to breach? A quasi-static solution could be, for example, Bernoulli + Kozeny-Carman. In the equations below, $P$ is pressure, $u$ is fluid velocity along $x$ and $t$ is time. Bernoulli's equation (one u -> dt on the right): $\frac{dP}{dx}\propto\frac{du}{dt}$ Kozeny-Carman (flow through a permeable medium): $\frac{dP}{dx}\propto u$ Now the idea is that fluid is pushed through the sponge. According to Bernoulli, this creates a pressure gradient as the fluid speed accelerates from rest to the output velocity. This velocity also defines a pressure according to Kozeny-Carman. Note that the external work is not included here (the fluid just accelerates and is resisted by the medium; is this a problem? Maybe $P$ has external and internal components?). If it is possible to assume that these pressure gradients are locally the same, you can write the relation: $ \frac{du}{dt} \propto u $ which has an exponential solution in time. The above is just an example of what I had in mind, and I was wondering whether more rigorous treatments exist for accelerating fluids through resistive media. Both equations can be derived from Navier-Stokes, so there might already be a version where both of these components appear. Answer: For the cases you are describing, you will generally need to account for the evolution of porosity and permeability. For example, in the sponge case, by squeezing the sponge you are compacting the pores: the pressure is likely slaved to the compaction rate (i.e. 
you would "squeeze harder" as the sponge stiffens due to strain hardening and the permeability decreases due to smaller pore throats; technically your rate of squeezing would be accelerating, assuming an elastic loading response.) In the sand-dam case, the permeability and porosity increase as sand is removed. In this system mass is not conserved (i.e. sand is removed by the flow). While the sand-dam system is self-accelerating, I would say the sponge system is driven more by outside forces. (For a sort of combination of the two, check out "injectites".) From a physics perspective, you might check out "Biot Consolidation" (some references here; the original paper is also quite readable). From a geology perspective, you might check out soft-sediment deformation. Edit: Note that classical Biot consolidation assumes a linear-elastic porous medium. This means 1) infinitesimal strains, 2) a linear "Hooke Law" for stress vs. strain, and 3) all strains are elastic, i.e. reversible. For your sponge case, at the least you would need to consider finite strain (if you like differential geometry, this may interest you; only 3d so simpler than general relativity I guess?). However with finite strain the model's "constant coefficients", such as permeability and elastic moduli, will also most likely change (e.g. permeability reduction, nonlinear elasticity). Finally, when the porosity reduction is irreversible, the term compaction is used (i.e. consolidation vs. compaction = elastic vs. plastic strain).
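Coming back to the question's quasi-static relation $\frac{du}{dt}\propto u$: its exponential behavior is easy to confirm numerically. A minimal sketch below integrates $du/dt = k\,u$ with forward Euler and compares it to the exact solution; the rate constant $k$ and initial speed $u_0$ are arbitrary illustrative values, not derived from any real sponge or sand geometry.

```python
import math

# Hypothetical parameters: feedback rate k and initial speed u0.
k, u0 = 2.0, 0.1
dt, steps = 1e-4, 10000          # integrate from t=0 to t=1

# Forward Euler integration of du/dt = k*u.
u = u0
for _ in range(steps):
    u += dt * k * u

t = steps * dt
exact = u0 * math.exp(k * t)     # exact exponential solution u0*exp(k*t)
print(u, exact)                  # Euler result closely tracks the exponential
```

With this step size the Euler result agrees with $u_0 e^{kt}$ to a fraction of a percent, illustrating the self-reinforcing (runaway) character of the quasi-static feedback.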
{ "domain": "physics.stackexchange", "id": 25450, "tags": "classical-mechanics, fluid-dynamics, navier-stokes, percolation" }
Deriving particle's wave equations
Question: My book tries to derive two equations. Unfortunately the book is in a language you wouldn't understand, so I will translate it as much as I can. $E = \hbar\omega$ $p = \hbar k$ It starts with the Lagrangian: $ S = \int_{t_0}^{t_1} L(x, y, \dot x, \dot y, t) dt$ Then the book writes down: $ p = \nabla S$ (eq 1) $ S(x,y, t) = S_0$ (eq 2) $ \frac{\partial S}{\partial t} + H(r, \nabla S, t) = 0$ (eq 3) $ H = \frac{1}{2m}(\nabla S)^2 + V(r,t)$ (when the problem is stationary, we get:) $H = E$ (eq 4) $E = -\frac{\partial S}{\partial t}$ (eq 5) To be honest, I didn't understand a single thing here. Let me write down my questions. I don't know why eq. 1 is true. I believe the gradient of $S$ is the partial derivatives of $S$ with respect to $\dot x$, $\dot y$. I believe we're describing scenarios where the kinetic energy is $\frac{1}{2}m\dot x^2$, $\frac{1}{2}m\dot y^2$, in which case the partial derivatives are $m\dot x$ and $m\dot y$, which are the momenta along the $x$ and $y$ axes. If I'm correct so far, my question is: why do we take the gradient of $S$? If we had written $p = \nabla L$, I would understand, but doing it on $S$ must give us the momenta summed up over the whole path? What advantage do we get from this? It gets used in (eq 3), but it seems wrong to put the whole momentum summed over the path into a calculation of total energy: (eq 4) depends on time, and plugging in a time gives the total energy at that specific time, whereas with $\nabla S$ we're putting in the total momentum summed over the whole path. That doesn't make sense to me. I couldn't fully understand what the Hamiltonian is, but I believe it's $KE + U$ (the total energy); still, I don't see the idea of (eq 3) at all. I might be asking more than one question, but I hope it's possible to explain all the equations in a clear way. Update Q1: In the expression $\frac{\partial S}{\partial t_1}$, we ask how much $S$ changes when we change $t_1$. 
This basically means we're not changing the path in the variation; we're just asking what if we had started the path from $t_1+\delta h$, which means the remaining path is the same. Correct? Q2: I wonder how we derived $\frac{\partial S}{\partial t_1} = H(t_1)$. My idea is as follows: we know that $H = vp - L$, from which we have $L = vp - H$. I also understand from calculus that $\frac{\partial S}{\partial t_1} = -L(t_1)$. So we have: $\frac{\partial S}{\partial t_1} = -(vp - H)(t_1)$ It seems like $vp$ is taken to be 0 here. Is this because at time $t_1$ the speed is 0? But if we're observing the motion from $t=2$ to $t=5$, at $t=2$ the speed wouldn't be 0. So why and how did we make it 0? Question 3: In most cases the Hamiltonian is the total energy, which is usually conserved. So at each point on the path the particle has the same total energy, which is $H$ as well. We derived: $H = E = \frac{\partial S}{\partial t_1} = -\frac{\partial S}{\partial t_2}$. This is clear, but my book says: $E = -\frac{\partial S}{\partial t}$ Note that here we differentiate with respect to $t$ and not the initial or final time. Where did we get this from, rigorously? Question 4: I realize now the book must be correct. I even asked my professor about this. $p$ must be the gradient of $S$, not only at the initial/final points but everywhere, as long as the potential energy does not depend on velocity. What would the proof be, though? Answer: It starts with the Lagrangian: $ S = \int_{t_0}^{t_1} L(x, y, \dot x, \dot y, t) dt$ Then the book writes down: $ p = \nabla S$ (eq 1) This Eq. (1) must be taken to mean that the momentum at the endpoint is equal to the gradient of the classical action with respect to the endpoint position. Consider, for example, a free particle in one dimension. The action is: $$ S[x(t)] = \int_{t_1}^{t_2} \frac{1}{2}m\dot{x}^2 dt\;. 
$$ By varying the action with fixed endpoints ($\delta x(t_1)=\delta x(t_2) = 0$) we find the classical path to be a straight line: $$ x_{cl}(t) = x_1 \frac{(t-t_2)}{(t_1 - t_2)} + x_2 \frac{t-t_1}{t_2 - t_1} $$ and note that $$ \dot x_{cl}(t) = \frac{x_2 - x_1}{t_2 - t_1} $$ and further note that $$ p_2 \equiv m\dot x_{cl}(t_2) = \frac{m(x_2 - x_1)}{t_2 - t_1} $$ The classical action can be evaluated in this example to find: $$ S_{cl} = S[x_{cl}(t)] = S[x_1 \frac{(t-t_2)}{(t_1 - t_2)} + x_2 \frac{t-t_1}{t_2 - t_1}] $$ $$ =\frac{m}{2}\frac{(x_2 - x_1)^2}{t_2 - t_1}\;.\tag{A} $$ Eq. (A) above shows that the classical action (the action evaluated for a specific path, the classical path) is a function of four variables: the two time endpoints and the two space endpoints. Using Eq. (A), we find: $$ \frac{\partial S_{cl}}{\partial x_2} = \frac{m(x_2 - x_1)}{t_2 - t_1} = p_2\;, $$ as required. Of course, the above example and Eq. (A) are only true in the simple free particle case. However, it turns out (but is left as an exercise to the reader to show) that in the general one-dimensional case we can still write: $$ \frac{\partial S_{cl}}{\partial x_2} = \left.\frac{\partial L}{\partial \dot x}\right|_{x_{cl}}(t_2) = p_2 $$ and also $$ \frac{\partial S_{cl}}{\partial x_1} = -\left.\frac{\partial L}{\partial \dot x}\right|_{x_{cl}}(t_1) = p_1. $$ (Hint: To prove this consider variations about the classical path where one of the endpoint variations is not zero.) The generalization to three dimensions is: $$ \vec \nabla_2 S_{cl}\equiv\frac{\partial S_{cl}}{\partial \vec x_2} = \left.\frac{\partial L}{\partial \dot{\vec{x}}}\right|_{\vec x_{cl}}(t_2) $$ To be honest, I didn't understand a single thing here. Let me write down my questions. I have answered one of the questions. It is too much to ask to answer all your questions here. But I hope that the above answer to where your Eq 1 comes from will set you on the right path. 
While I cannot go into detail about your other questions, I will note that the (Hamiltonian) energy $H$ is defined (in one dimension) as: $$ H = \dot x p - L = \dot x \frac{\partial L}{\partial \dot x} - L $$ and further, by varying the initial time $t_1$ (along with a corresponding variation in path to make sure the path remains the classical path), it can be shown that: $$ \frac{\partial S_{cl}}{\partial t_1} = H(t_1)\;. $$ and by varying the final time, it can be shown that: $$ \frac{\partial S_{cl}}{\partial t_2} = -H(t_2)\;. $$ For many systems the Hamiltonian energy $H$ is the same as the total mechanical energy $E$. This is not always true, but often true. Some other answers on this site discuss the specific conditions that must hold for $H$ to equal $E$. But, suffice it to say that whenever the potential $V$ is independent of the velocity $\dot x$ (which is often the case) and also when the kinetic energy $T$ is a homogeneous quadratic function of the velocity, we can write: $$ H(t_2) = E(t_2) = -\frac{\partial S_{cl}}{\partial t_2}\;, $$ that is, the Hamiltonian energy is equal to the total energy, which is equal to the negative of the derivative of the classical action with respect to time. When the Hamiltonian energy and total mechanical energy are conserved (constant in time) we can drop the time argument and write: $$ H = E = -\frac{\partial S_{cl}}{\partial t_2}=\frac{\partial S_{cl}}{\partial t_1}\;, $$ For example, in our simple free particle example $$ \frac{\partial }{\partial t_1}\left(\frac{m}{2}\frac{(x_2 - x_1)^2}{t_2 - t_1}\right) = \frac{m}{2}\frac{(x_2 - x_1)^2}{(t_2 - t_1)^2} = \frac{1}{2}m\dot x_{cl}^2 = E_{free} $$ Reference for you: Feynman and Hibbs, Quantum Mechanics and Path Integrals, Chapter 2-1.
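As a quick numerical sanity check on Eq. (A) and the two endpoint relations, one can differentiate the closed-form free-particle action $S_{cl} = \frac{m}{2}\frac{(x_2-x_1)^2}{t_2-t_1}$ with central finite differences. The endpoint values $x_1, x_2, t_1, t_2$ below are arbitrary:

```python
# Free-particle classical action from Eq. (A): S = (m/2)(x2-x1)^2/(t2-t1).
m, x1, x2, t1, t2, h = 1.0, 0.0, 3.0, 0.0, 2.0, 1e-6

def S(x1, x2, t1, t2):
    return 0.5 * m * (x2 - x1) ** 2 / (t2 - t1)

v = (x2 - x1) / (t2 - t1)   # classical velocity, here 1.5

# Central finite differences for dS/dx2 and dS/dt2.
dS_dx2 = (S(x1, x2 + h, t1, t2) - S(x1, x2 - h, t1, t2)) / (2 * h)
dS_dt2 = (S(x1, x2, t1, t2 + h) - S(x1, x2, t1, t2 - h)) / (2 * h)

print(dS_dx2, m * v)              # both ~ 1.5   : dS/dx2 = p2
print(-dS_dt2, 0.5 * m * v**2)    # both ~ 1.125 : -dS/dt2 = E
```

The numbers reproduce $\partial S_{cl}/\partial x_2 = p_2$ and $-\partial S_{cl}/\partial t_2 = E$ to within the finite-difference error.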
{ "domain": "physics.stackexchange", "id": 96531, "tags": "classical-mechanics, lagrangian-formalism, hamiltonian-formalism" }
Eigenstates into which a system can be projected after a measurement
Question: I'm currently reading Dirac's Principles of Quantum Mechanics, on page 36, he says: Another assumption we make connected to the physical interpretation of the theory is that, if a certain real dynamical variable $\xi$ is measured with the system in a particular state, the states into which the system may jump on account of the measurement are such that the original state is dependent on them. On what physical basis can we make this assumption and why is it reasonable? Answer: This phenomenon is called the collapse of the wave function. It is one of the tenets of the Copenhagen interpretation of quantum mechanics. The eigenstates $|\xi_i\rangle$ of the $\Xi$ operator form a complete set. From linear algebra we have $$I=\sum_i|\xi_i\rangle\langle \xi_i|$$ where $I$ is the identity operator. We apply this to the state vector $|\psi\rangle$: $$|\psi\rangle=\sum_i|\xi_i\rangle\langle \xi_i|\psi\rangle$$ We have now expressed the state in terms of the $\Xi$ eigenstates. When we measure $\Xi$ and get $\xi_j$, we project the state vector onto the eigenstate using the projection operator $\mathbb{P}_j=|\xi_j\rangle\langle\xi_j|$. So after measurement we get $$\psi\longrightarrow N\mathbb{P}_j|\psi\rangle=N\langle\xi_j|\psi\rangle|\xi_j\rangle$$ where $N$ is the new normalization constant.
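A minimal numerical sketch of the project-and-renormalize step above, for a hypothetical two-state observable whose eigenvectors are $|+\rangle = (|0\rangle + |1\rangle)/\sqrt 2$ and $|-\rangle = (|0\rangle - |1\rangle)/\sqrt 2$ (the state and basis are purely illustrative):

```python
import math

# Eigenbasis of a hypothetical observable Xi (real vectors for simplicity).
plus  = [1 / math.sqrt(2),  1 / math.sqrt(2)]
minus = [1 / math.sqrt(2), -1 / math.sqrt(2)]
psi   = [1.0, 0.0]                  # |psi> = (|+> + |->)/sqrt(2)

def inner(a, b):
    return sum(x * y for x, y in zip(a, b))

amp = inner(plus, psi)              # amplitude <+|psi> = 1/sqrt(2)
p_plus = amp ** 2                   # Born-rule probability of the "+" outcome

# Collapse: apply P = |+><+| to |psi>, then renormalize by N.
post = [amp * c for c in plus]      # P|psi> = <+|psi> |+>
norm = math.sqrt(inner(post, post))
post = [c / norm for c in post]     # normalized post-measurement state = |+>

print(p_plus, post)                 # 0.5, the |+> eigenstate
```

The probability comes out to $|\langle +|\psi\rangle|^2 = 1/2$ and the post-measurement state is exactly the eigenstate $|+\rangle$, as the projection formula requires.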
{ "domain": "physics.stackexchange", "id": 19812, "tags": "quantum-mechanics, measurement-problem, observables, wavefunction-collapse" }
What is the physical meaning of $\kappa$ and $R$ in curved space?
Question: What is the physical meaning of $\kappa$ and $R$ in curved space? $$dl^2 = \frac{dr^2}{1 - \kappa\frac{r^2}{R^2}} + r^2d\theta^2 + r^2\sin^2\theta d\phi^2$$ Answer: That's a metric for a space of constant curvature in 3 dimensions. Typically you would parameterise $\kappa$ as -1, 0, or 1 and $R$ as a non zero real number. So $\kappa$ just tells you if you have positive, negative or no curvature and $R$ tells you the magnitude of the curvature. $R$ can be thought of as a radius of curvature if you choose to embed your 3-space in a higher dimensional space. For example, if you take $\kappa=1$ and substitute $$r=R \sin\psi$$ you'll get a 3-sphere metric (in spherical polars) with an $R^2$ overall factor.
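Spelling out that substitution: with $\kappa = 1$ and $r = R\sin\psi$, one has

```latex
dr = R\cos\psi\, d\psi, \qquad 1 - \frac{r^2}{R^2} = \cos^2\psi
\quad\Longrightarrow\quad
dl^2 = R^2\left[\, d\psi^2 + \sin^2\psi\left( d\theta^2 + \sin^2\theta\, d\phi^2 \right) \right],
```

which is the round metric on a 3-sphere of radius $R$, making the "radius of curvature" reading of $R$ explicit.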
{ "domain": "physics.stackexchange", "id": 5725, "tags": "differential-geometry" }
What will the input_shape of tf.keras.layers.Conv3D be for these inputs
Question: I have many videos, and each video is made up of 37 images (there are 37 frames in the whole video). The dimension of each image is (100, 100, 3), so the shape of my dataset is [num_of_videos, 37, 100, 100, 3]. If I want to pass these videos through tf.keras.layers.Conv3D(), what will the input shape for the Conv3D be: Conv3D(32, 3, padding='same', activation='relu', input_shape=[What will the input shape here be]) Answer: From the Keras documentation: Input shape: 5+D tensor with shape: batch_shape + (channels, conv_dim1, conv_dim2, conv_dim3) if data_format='channels_first', or 5+D tensor with shape: batch_shape + (conv_dim1, conv_dim2, conv_dim3, channels) if data_format='channels_last'. Assuming data_format='channels_last' (the Keras default), num_of_videos is the batch dimension, which is not included in input_shape. Since each video already has shape (37, 100, 100, 3), i.e. (frames, height, width, channels), you can pass it directly: input_shape=(37, 100, 100, 3)
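As a plain-Python sketch of the shape bookkeeping (no TensorFlow needed), the following computes the output shape of a Conv3D layer with padding='same' under the channels_last convention. The helper function is hypothetical, not part of the Keras API, and the stride values are illustrative:

```python
import math

# Hypothetical helper: output shape of Conv3D with padding='same',
# assuming data_format='channels_last' (Keras default). With 'same'
# padding each spatial dim shrinks only by the stride: ceil(d / s).
def conv3d_same_output_shape(input_shape, filters, strides=(1, 1, 1)):
    """input_shape = (d1, d2, d3, channels); returns the per-sample output shape."""
    d1, d2, d3, _ = input_shape
    spatial = tuple(math.ceil(d / s) for d, s in zip((d1, d2, d3), strides))
    return spatial + (filters,)

video = (37, 100, 100, 3)  # (frames, height, width, channels), one video
print(conv3d_same_output_shape(video, filters=32))  # (37, 100, 100, 32)
```

With the default stride of 1, 'same' padding preserves the spatial dimensions and only the channel count changes to the number of filters; with strides=(2, 2, 2) the spatial dims would shrink to (19, 50, 50).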
{ "domain": "datascience.stackexchange", "id": 9276, "tags": "machine-learning, deep-learning, neural-network, convolutional-neural-network, convolution" }