# Connect 4: Spot the Fake!

The bank has been broken into, and all the local mafia thugs have an unusual alibi: they were at home playing Connect 4! To assist with the investigation, you are asked to write a program that validates all the Connect 4 boards that have been seized, checking that each position really is a position from a valid Connect 4 game and has not been hastily put together as soon as the police knocked on the door.

The rules of Connect 4: players R and Y take it in turns to drop tiles of their colour into columns of a 7x6 grid. When a player drops a tile into a column, it falls down to occupy the lowest unfilled position in that column. If a player manages to get a horizontal, vertical or diagonal run of four tiles of their colour on the board, then they win and the game ends immediately. For example (with R starting), the following is an impossible Connect 4 position:

```
| | | | | | | |
| | | | | | | |
| | | | | | | |
| |R| | | | | |
| |Y| | | | | |
|R| |Y| | | | |
```

Your program or function must take in a Connect 4 board and return either

• A falsy value, indicating that the position is impossible, or
• A string of numbers from 1 to 7, indicating one possible sequence of moves leading to that position (the columns are numbered 1 to 7 from left to right, so the sequence 112, for example, indicates a red move in column 1, followed by a yellow move in column 1, followed by a red move in column 2).

You may choose a column numbering other than 1234567 if you like, as long as you specify it in your solution. If you want to return the list in some other format, for example as an array [2, 4, 3, 1, 1, 3], that is fine too, as long as it is easy to see what the moves are. You can choose to read the board in any sensible format, including using letters other than R and Y for the players, but you must specify which player goes first. You can assume that the board will always be 6x7, with two players.
You may assume that the positions you receive are at least physically possible to create on a standard Connect 4 board, i.e. that there will be no 'floating' pieces. You can assume that the board will be non-empty. This is code golf, so shortest answer wins. Standard loopholes apply.

Examples

```
| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |   --> 1234567 (one possible answer)
| | | | | | | |
|R|Y|R|Y|R|Y|R|

| | | | | | | |
| | | | | | | |
| | | | | | | |
| | |R| | | | |   --> false
| | |Y| | | | |
|R| |Y| | | | |

| | | | | | | |
| | |Y| | | | |
| | |R| | | | |
| | |Y| | | | |   --> 323333 (only possible answer)
| | |R| | | | |
| |Y|R| | | | |

| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |   --> false (this is the position arising after
| |Y|Y|Y|Y| | |       the moves 22334455, but using those moves
| |R|R|R|R| | |       the game would have ended once R made a 4)

| | | | | | | |
| | | | | | | |
|Y| | | | | | |
|R|Y| | | | | |   --> 2134231211 (among other possibilities)
|R|R|Y| | | | |
|Y|R|R|Y| | | |

| | | | | | | |
| | | | | | | |
|Y| | | | | | |
|R|Y| | | | | |   --> false (for example, 21342312117 does not
|R|R|Y| | | | |       work, because Y has already made a diagonal 4)
|Y|R|R|Y| | |R|

| | | | | | | |
| | | | | | | |
| | | | | | | |
| | | | | | | |   --> 112244553 or similar
|Y|Y| |Y|Y| | |
|R|R|R|R|R| | |
```

• John, out of curiosity, do you know if a non-brute-force algorithm exists? Jan 26, 2019 at 17:39

# JavaScript (ES6), 202 194 187 183 bytes

Takes input as a matrix with 2 for red, 4 for yellow and 0 for empty. Returns a string of 0-indexed moves (or an empty string if there's no solution). Reds start the game.

```js
m=>(p=[...'5555555'],g=(c,s=o='')=>/2|4/.test(m)?['',0,2,4].some(n=>m.join``.match(`(1|3)(.{1${n}}\\1){3}`))?o:p.map((y,x)=>m[m[y][x]--^c||p[g(c^6,s+x,p[x]--),x]++,y][x]++)&&o:o=s)(2)
```

Try it online!

### How?
The recursive function g attempts to replace all 2's and 4's in the input matrix with 1's and 3's respectively. While doing so, it makes sure that we don't have any run of four consecutive odd values until all even values have disappeared (i.e. if a side wins, it must be the last move). The row y of the next available slot for each column x is stored in p[x].

### Commented

```js
m => (                        // m[] = input matrix
  p = [...'5555555'],         // p[] = next row for each column
  g = (c,                     // g = recursive function taking c = color,
       s = o = '') =>         //     s = current solution, o = final output
    /2|4/.test(m) ?           // if the matrix still contains at least a 2 or a 4:
      ['', 0, 2, 4]           //   see if we have four consecutive 1's or 3's
      .some(n =>              //   by testing the four possible directions
        m.join``.match(       //   on the joined matrix, using
          `(1|3)(.{1${n}}\\1){3}` // a regular expression where the number of characters
        )                     //   between each occurrence is either 1, 10, 12 or 14
      ) ?                     //   (horizontal, diagonal, vertical, anti-diagonal)
        o                     //   if we have a match: abort and just return
                              //   the current value of o
      :                       //   else:
        p.map((y, x) =>       //     for each cell at (x, y = p[x]):
          m[                  //
            m[y][x]--         //       decrement the value of the cell
            ^ c ||            //       and compare the original value with c
            p[                //       if they're equal:
              g(              //         do a recursive call with:
                c ^ 6,        //           the other color
                s + x,        //           the updated solution
                p[x]--        //           the updated row for this column
              ),              //         end of recursive call
              x               //         then:
            ]++,              //         restore p[x]
            y                 //         and restore m[y][x]
          ][x]++              //         to their initial values
        ) && o                //   end of map(); yield o
    :                         // else:
      o = s                   //   we've found a solution: copy s to o
)(2)                          // initial call to g() with c = 2
```

• Note I have asked "May we assume that the empty board will not be given as input?" - if we have to handle this then your code will need a tweak.
Jan 21, 2019 at 0:00

• I don't know why, but `f([[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,2,0,2,0,0],[0,2,2,0,2,2,0],[1,1,1,1,1,1,1]])` terminates with 0, and `f([[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,0,0,0,0,0],[0,0,2,0,2,0,0],[2,2,2,0,2,2,1],[1,1,1,1,1,1,1]])` should be true Jan 21, 2019 at 17:47

• @NahuelFouilleul Thanks for reporting this. I've fixed the code and added these test cases. Jan 21, 2019 at 18:44

# Jelly, 57 bytes

```
ŒṪŒ!µ0ịŒṬ¬a³ZU,Ɗ;ŒD$€Ẏṡ€4Ḅo1%15;Ḋ€ṢṚ$Ƒƙ$Ȧȧœị³$2R¤ṁ$ƑµƇṪṪ€
```

Takes a matrix where 0 is unfilled, 1 played first, and 2 played second. Yields a list of 1-indexed columns, empty if a fake was identified. Try it online! (too inefficient for more than 7 pieces to run in under a minute)

Note:

1. Assumes that no "floating" pieces are present (fix this by prepending ZṠṢ€Ƒȧ for +6 bytes)
2. Assumes that the empty board is a fake

# Python 2, 295 285 bytes

```python
def f(a):
 if 1-any(a):return[]
 p=sum(map(len,a))%2
 for i in R(7):
  if a[i][-1:]==`p`:
   b=a[:];b[i]=b[i][:-1];L=f(b)
   if L>1>(`1-p`*4in','.join([J((u[j]+' '*14)[n-j]for j in R(7))for n in R(12)for u in[b,b[::-1]]]+b+map(J,zip(*[r+' '*7for r in b])))):return L+[i]
R=range;J=''.join
```

Try it online! -10 thx to Jo King.

Input is a list of strings representing the columns, with '1' for Red and '0' for Yellow. The strings are not ' '-padded. So the (falsy) case:

```
| | | | | | | |
| | | | | | | |
|Y| | | | | | |
|R|Y| | | | | |
|R|R|Y| | | | |
|Y|R|R|Y| | |R|
```

is input as:

```python
[ '0110', '110', '10', '0', '', '', '1' ]
```

Output is a list of column indexes, 0-indexed, that could make the board, or None if it's not valid. Accepts the empty board as valid (returns the empty list [] instead of None).
This approach is recursive from the last move to the first move: based on the parity of the total number of moves taken, we remove either the last Red move or the last Yellow move (or fail if that is not possible); check the resulting board to see if the opponent has 4-in-a-row (in which case fail, because the game should have stopped already); otherwise, recurse until the board is empty (which is valid).

The 4-in-a-row code is the most bloaty part. All the diagonal strings for the matrix b are generated by:

```python
[ ''.join( (u[j]+' '*14)[n-j] for j in range(7) ) for u in[b,b[::-1]]for n in range(12) ]
```

which first lists out the 'down-sloping' diagonals, and then the 'up-sloping' ones.
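To see what those diagonal strings look like outside of the golfed code, here is an ungolfed sketch using the same column-string input format (the function names are my own, not from the answer):

```python
# Board: list of 7 column strings, bottom row first, not padded,
# '1' = Red and '0' = Yellow, as in the Python 2 answer above.

def diagonals(b):
    """All 24 diagonal strings: 12 'down-sloping', then 12 'up-sloping'."""
    return [''.join((u[j] + ' ' * 14)[n - j] for j in range(7))
            for u in [b, b[::-1]] for n in range(12)]

def has_diagonal_four(b):
    """True iff either player has four-in-a-row on a diagonal."""
    return any('1111' in d or '0000' in d for d in diagonals(b))

# An up-sloping Red diagonal through columns 1-4:
assert has_diagonal_four(['1', '01', '001', '0001', '', '', ''])

# A horizontal row of four is deliberately *not* caught by the
# diagonal scan (the golfed code checks rows and columns separately):
assert not has_diagonal_four(['1', '1', '1', '1', '', '', ''])
```

In the golfed answer these diagonal strings are concatenated with the rows (`b` itself) and the transposed columns before the single substring test.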
# Normal measures on $P_{\kappa}(\lambda)$ extend the club filter

**Question** (Amit Kumar Gupta, 2011-04-08):

This is (a variation on) exercise 20.4 in Jech's "Set Theory." Let $j : V \to M$ witness $\lambda$-supercompactness of $\kappa$, and consider the normal measure $U$ on $P_{\kappa}(\lambda)$ consisting of those $x$ such that $j[\lambda] \in j(x)$. (How do you make the left quotation mark symbol to denote 'j-image-of-lambda'?)

**We want to show that this measure extends the club filter.**

The hint is as follows: Suppose $C$ is club. Then define $D = j[C]$. Then:

1. $D$ is a directed subset of $j(C)$.
2. $D$ has size $|C| \leq \lambda^{<\kappa} < j(\kappa)$.
3. Therefore $\bigcup D \in j(C)$.
4. $\bigcup D = j[\lambda]$.

I'm fine with 1. I'm not sure about 2 - where is the argument taking place, in $V$ or in $M$, or both? For 3, it appears the underlying argument is this:

$V \vDash \forall E \subset C\ (E$ directed and $|E| < \kappa \Rightarrow \bigcup E \in C)$

and so

$M \vDash \forall E \subset j(C)\ (E$ directed and $|E| < j(\kappa) \Rightarrow \bigcup E \in j(C))$

I can accept this assuming that 2 means "$D \in M$ and $M \vDash |D| < j(\kappa)$." I'm having trouble with 4 as well - I believe that $j[\lambda] \subseteq \bigcup D$, but why does the reverse inclusion hold, i.e.
why is it that $x \in C, \beta \in j(x) \Rightarrow \beta \in j[\lambda]$?

**Answer** (Jason, 2011-04-08):

Suppose $\kappa$ is $\lambda$-supercompact for some $\lambda \geq \kappa$, and let $j: V \rightarrow M$ be an elementary embedding with critical point $\kappa$ such that $j(\kappa) > \lambda$ and $M^{\lambda} \subseteq M$ for some inner model $M$. First, observe that $V$ and $M$ agree on $P_{\kappa}\lambda$ because $M$ is closed under ${<}\kappa$ sequences. In particular, this means that $\lambda^{{<}\kappa} \leq (\lambda^{{<}\kappa})^M$ since $M \subseteq V$. But this then means that $j(\kappa) > (\lambda^{{<}\kappa})^M \geq \lambda^{{<}\kappa}$ because $j(\kappa)$ is inaccessible in $M$ and $j(\kappa)$ is greater than both $\lambda$ and $\kappa$. Next, note that any $x \in P_{\kappa}\lambda$ will be a subset of $\lambda$ having size less than the critical point $\kappa$, so that $j(x) = j''x \subseteq j''\lambda$.

[Specifically, if for some $\alpha < \kappa$ we have a bijection $f: \alpha \rightarrow x$, then $j(f)$ will be a bijection between $j(\alpha) = \alpha$ and $j(x)$. So every element of $j(x)$ is of the form $j(f)(\beta)$ for some $\beta < \alpha$, but $j(f)(\beta) = j(f(\beta)) \in j''x$ since $\beta$ is also below the critical point.]

Also, $M$ will contain $h = j \upharpoonright \lambda$ by its closure under $\lambda$ sequences. Therefore, $M$ will have $j''P_{\kappa}\lambda = \{j(x) \mid x \in P_{\kappa}\lambda\} = \{j''x \mid x \in P_{\kappa}\lambda\} = \{h''x \mid x \in P_{\kappa}\lambda\}$.
Now letting $g: P_{\kappa}\lambda \rightarrow \lambda^{{<}\kappa}$ be a bijection in $V$, we will have a bijection $j(g) \upharpoonright j''P_{\kappa}\lambda: j''P_{\kappa}\lambda \rightarrow j''\lambda^{{<}\kappa}$ in $M$. Therefore, $M$ will have the range of this restricted map, which is exactly $j''\lambda^{{<}\kappa}$. Now, since $C$ has size at most $\lambda^{{<}\kappa}$ (in $V$), we may let $e: \lambda^{{<}\kappa} \rightarrow C$ be a surjection. Then $j(e) \upharpoonright j''\lambda^{{<}\kappa}: j''\lambda^{{<}\kappa} \rightarrow j''C$ is a surjection in $M$, so similarly its range, $D = j''C$, will be in $M$. But $M$ will also know that $j''\lambda^{{<}\kappa}$ has size $\lambda^{{<}\kappa} < j(\kappa)$, because $M$ can construct $j \upharpoonright \lambda^{{<}\kappa}$ from $j''\lambda^{{<}\kappa}$ by virtue of $j$ being order-preserving. Therefore $\bigcup D \in j(C)$ by the ${<}j(\kappa)$-directed closure of $j(C)$ in $M$, as you mention.

Also, if $x \in C$, then $|x| < \kappa$ and $x \subseteq \lambda$, so $j(x) = j''x \subseteq j''\lambda$. Therefore, $\bigcup D = \bigcup j''C \subseteq j''\lambda$.
Lemma 32.19.2. Let $f : X \to Y$ be a morphism of schemes. Let $y \in Y$. Assume $f$ is proper and $\dim(X_y) = d$. Then

1. for $\mathcal{F} \in \mathit{QCoh}(\mathcal{O}_X)$ we have $(R^if_*\mathcal{F})_y = 0$ for all $i > d$,
2. there is an affine open neighbourhood $V \subset Y$ of $y$ such that $f^{-1}(V) \to V$ and $d$ satisfy the assumptions and conclusions of Lemma 32.19.1.

Proof. By Morphisms, Lemma 29.28.4 and the fact that $f$ is closed, we can find an affine open neighbourhood $V$ of $y$ such that the fibres over points of $V$ all have dimension $\leq d$. Thus we may assume $X \to Y$ is a proper morphism all of whose fibres have dimension $\leq d$ with $Y$ affine. We will show that (2) holds, which will immediately imply (1) for all $y \in Y$.

By Lemma 32.13.2 we can write $X = \mathop{\mathrm{lim}}\nolimits X_i$ as a cofiltered limit with $X_i \to Y$ proper and of finite presentation and such that both $X \to X_i$ and the transition morphisms are closed immersions. For some $i$ we have that $X_i \to Y$ has fibres of dimension $\leq d$, see Lemma 32.18.1. For a quasi-coherent $\mathcal{O}_X$-module $\mathcal{F}$ we have $R^pf_*\mathcal{F} = R^pf_{i,*}(X \to X_i)_*\mathcal{F}$ by Cohomology of Schemes, Lemma 30.2.3 and Leray (Cohomology, Lemma 20.13.8). Thus we may replace $X$ by $X_i$ and reduce to the case discussed in the next paragraph.

Assume $Y$ is affine and $f : X \to Y$ is proper and of finite presentation and all fibres have dimension $\leq d$. It suffices to show that $H^p(X, \mathcal{F}) = 0$ for $p > d$. Namely, by Cohomology of Schemes, Lemma 30.4.6 we have $H^p(X, \mathcal{F}) = H^0(Y, R^pf_*\mathcal{F})$. On the other hand, $R^pf_*\mathcal{F}$ is quasi-coherent on $Y$ by Cohomology of Schemes, Lemma 30.4.5, hence vanishing of global sections implies vanishing.
Write $Y = \mathop{\mathrm{lim}}\nolimits_{i \in I} Y_i$ as a cofiltered limit of affine schemes with $Y_i$ the spectrum of a Noetherian ring (for example a finite type $\mathbf{Z}$-algebra). We can choose an element $0 \in I$ and a finite type morphism $X_0 \to Y_0$ such that $X \cong Y \times_{Y_0} X_0$, see Lemma 32.10.1. After increasing $0$ we may assume $X_0 \to Y_0$ is proper (Lemma 32.13.1) and that the fibres of $X_0 \to Y_0$ have dimension $\leq d$ (Lemma 32.18.1). Since $X \to X_0$ is affine, we find that $H^p(X, \mathcal{F}) = H^p(X_0, (X \to X_0)_*\mathcal{F})$ by Cohomology of Schemes, Lemma 30.2.4. This reduces us to the case discussed in the next paragraph.

Assume $Y$ is affine Noetherian and $f : X \to Y$ is proper and all fibres have dimension $\leq d$. In this case we can write $\mathcal{F} = \mathop{\mathrm{colim}}\nolimits \mathcal{F}_i$ as a filtered colimit of coherent $\mathcal{O}_X$-modules, see Properties, Lemma 28.22.7. Then $H^p(X, \mathcal{F}) = \mathop{\mathrm{colim}}\nolimits H^p(X, \mathcal{F}_i)$ by Cohomology, Lemma 20.19.1. Thus we may assume $\mathcal{F}$ is coherent. In this case we see that $(R^pf_*\mathcal{F})_y = 0$ for all $y \in Y$ by Cohomology of Schemes, Lemma 30.20.9. Thus $R^pf_*\mathcal{F} = 0$ and therefore $H^p(X, \mathcal{F}) = 0$ (see above) and we win. $\square$
## A Note on the Sample Complexity of the Er-SpUD Algorithm by Spielman, Wang and Wright for Exact Recovery of Sparsely Used Dictionaries We consider the problem of recovering an invertible $n \times n$ matrix $A$ and a sparse $n \times p$ random matrix $X$ based on the observation of $Y = AX$ (up to a scaling and permutation of columns of $A$ and rows of $X$). Using only elementary tools from the theory of empirical processes we show that a version of the Er-SpUD algorithm by Spielman, Wang and Wright with high probability recovers $A$ and $X$ exactly, provided that $p \ge Cn\log n$, which is optimal up to the constant $C$.
# Partition a set into two subsets such that the sum of each subset is the same

Given a set of numbers, partition it into two subsets such that the sums of the two subsets are equal. We solve this problem using a Dynamic Programming approach.

For example, the array of numbers A = {7, 5, 6, 11, 3, 4} can be divided into two such subsets in 2 ways:

- {5, 6, 7} and {11, 3, 4}
- {3, 4, 5, 6} and {11, 7}

## The above problem can be solved by the following steps:

Assuming that the sum of all elements of the array is S, the two subsets must each have a sum of S/2. Hence:

1. Find the sum of all the elements of the array. If the sum is odd, the array cannot be partitioned into two subsets having equal sums.
2. If the sum is even, divide the array into subsets such that both have sums equal to S/2.

For the second step, we can use a number of different methods, as stated below.

## Brute force Approach

This is a recursive method in which we consider each possible subset of the array and check whether its sum is equal to S/2, by eliminating the last element of the array in each turn.

#### The algorithm for this method is:

1. For each recursion of the method, divide the problem into two subproblems:
   1. Create a new subset of the array including the last element of the array, if its value does not exceed S/2, and repeat the recursive step for the new subarray.
   2. Create a new subset of the array excluding the last element of the array, and repeat the recursive step for the new subarray.
   3. If the sum of either of the above subsets is equal to S/2, return true; otherwise return false.
2.
If any of the above subproblems returns true, then return true; otherwise return false.

#### The code for this method is:

```cpp
#include <bits/stdc++.h>
using namespace std;

bool subset (int arr[], int n, int sum)
{
    if (sum == 0)
        return true;
    if (n == 0 && sum != 0)
        return false;

    // If the last element is greater than sum, then ignore it
    if (arr[n-1] > sum)
        return subset (arr, n-1, sum);

    // check if sum can be obtained by excluding the element or including it
    return subset (arr, n-1, sum) || subset (arr, n-1, sum-arr[n-1]);
}

bool partition (int arr[], int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += arr[i];

    // If sum is odd, there cannot be two subsets with equal sum
    if (sum%2 != 0)
        return false;

    // Find if there is a subset with sum equal to half of the total sum
    return subset (arr, n, sum/2);
}

int main()
{
    int arr[] = {7, 5, 6, 11, 3, 4};
    int n = sizeof(arr)/sizeof(arr[0]);
    if (partition(arr, n))
        cout << "Can be divided into two subsets of equal sum";
    else
        cout << "Can not be divided into two subsets of equal sum";
    return 0;
}
```

For the given example the recursive tree for the above solution will look like this:

#### The space and time complexity of this method is:

The worst case time complexity of the above method is O(2^n), where n is the total number of elements in the array. The space complexity is O(n), used by the recursion stack.

As we can see, the above solution has a high time complexity, so we use a dynamic programming approach: rather than repeating the same evaluations at each step, we store the result of each evaluation in order to reuse it in future steps.
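The repeated subproblems can be seen directly by adding memoization to the recursion above. This is a sketch of the same idea in Python (the function names are mine, not from the article):

```python
from functools import lru_cache

def can_partition(arr):
    """True iff arr splits into two subsets of equal sum."""
    total = sum(arr)
    if total % 2:                # odd total: no equal split possible
        return False

    @lru_cache(maxsize=None)     # cache (n, target) results
    def subset(n, target):
        """Can some subset of arr[:n] sum to exactly `target`?"""
        if target == 0:
            return True
        if n == 0:
            return False
        if arr[n - 1] > target:              # element too big: must exclude it
            return subset(n - 1, target)
        return (subset(n - 1, target) or             # exclude the element
                subset(n - 1, target - arr[n - 1]))  # include the element

    return subset(len(arr), total // 2)

assert can_partition([7, 5, 6, 11, 3, 4])     # {7,5,6} vs {11,3,4}
assert not can_partition([4, 6, 3, 5, 2, 9])  # total 29 is odd
```

With the cache in place, each (n, target) pair is evaluated at most once, which is exactly the observation that the table-based solution below exploits.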
## Dynamic Programming Approach

In this case, we create a two dimensional array of boolean elements, in which the entry in row i and column s is true if a subset with sum equal to s can be formed from the first i elements. We decide whether to add an element into the subset depending on whether its value is at most the sum being considered. We fill the array in a bottom-up manner until we reach the last element of the array, which gives the final answer.

#### The algorithm for this method is:

1. For every element i in the array and sum value s (incremented until it reaches the value S/2):
   1. Check whether the subset with sum equal to s can be formed by excluding the element i.
   2. Test the condition that the value of the element is at most s:
      1. If the above condition is true, check whether the subset with sum equal to s can be formed by including the element i.
   3. If either of the above checks succeeds, store true into the value of the array at the ith row and sth column, i.e. we can form a subset of elements with sum equal to s.
#### The code for this method is:

```cpp
#include <bits/stdc++.h>
using namespace std;

bool partition (int arr[], int n)
{
    int sum = 0;
    int i, j;
    for (i = 0; i < n; i++)
        sum += arr[i];

    if (sum % 2 != 0)
        return false;

    bool part[n + 1][sum / 2 + 1];

    // initialize top row as false
    for (i = 0; i <= sum/2; i++)
        part[0][i] = false;

    // initialize leftmost column as true
    for (i = 0; i <= n; i++)
        part[i][0] = true;

    // Fill the partition table in bottom-up manner
    for (i = 1; i <= n; i++)
    {
        for (j = 1; j <= sum/2; j++)
        {
            part[i][j] = part[i-1][j];
            if (j >= arr[i - 1])
                part[i][j] = part[i][j] || part[i - 1][j - arr[i - 1]];
        }
    }
    return part[n][sum/2];
}

int main()
{
    int arr[] = {3, 4, 5, 6, 7, 11};
    int n = sizeof(arr) / sizeof(arr[0]);
    if (partition(arr, n) == true)
        cout << "Can be divided into two subsets of equal sum";
    else
        cout << "Can not be divided into two subsets of equal sum";
    return 0;
}
```

For the given example the two dimensional boolean table for the above solution will look like this:

#### The space and time complexity of this method is:

The worst case time complexity of the above method is O(n*s), where n is the total number of elements in the array and s is half the sum of all elements (the target subset sum). The space complexity is also O(n*s), used to store the two dimensional table of subproblem results.

## Further work and applications

1. The Partition problem is referred to as an NP-complete problem in computer science, and the above solution is a pseudo-polynomial time dynamic programming solution. It is also referred to as "the easiest hard problem".
2. Another optimisation version of the above problem is to partition a set into two subsets such that the difference in sums of the two subsets is minimum. This version of the problem is classified as NP-hard.
3.
The dynamic programming solution to this problem is similar to that of the knapsack problem, where a similar two dimensional table is maintained to check whether the element in the row should be included in order to make the total sum equal to the value of the column.

## Question 1

#### How does dynamic programming simplify the solution?

- As it stores the result of each evaluation, repeated calculations are avoided
- As we create a two dimensional array, it enables faster access to the values stored in it
- It brings no change to the performance of the program
- It improves space complexity of the algorithm by using a two dimensional table

## Question 2

#### The partition problem is an:

- NP-complete problem
- NP-hard problem
- NP problem
- None of these

## Question 3

#### If the elements in an array are {4, 6, 3, 5, 2, 9}, then what will the partitioned subsets be, such that their sums are equal?

- Cannot be partitioned
- {4, 6} and {3, 5, 2}
- Cannot say

The sum of the elements in the array is 29, which is an odd number. Hence the array cannot be partitioned into equal sum subsets.

With this article at OpenGenus, you must have the complete idea of solving this problem of partitioning a set into two subsets such that the sums of the two subsets are equal.
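One closing implementation note on the knapsack connection: since each row of the table depends only on the previous row, the O(n*s) table can be collapsed to a single boolean row of length s+1. A Python sketch (names are mine):

```python
def can_split(arr):
    """True iff arr splits into two subsets of equal sum, using O(s) space."""
    total = sum(arr)
    if total % 2:
        return False
    half = total // 2
    dp = [True] + [False] * half   # dp[j]: some subset seen so far sums to j
    for a in arr:
        # iterate j downwards so each element is used at most once
        for j in range(half, a - 1, -1):
            dp[j] = dp[j] or dp[j - a]
    return dp[half]

assert can_split([7, 5, 6, 11, 3, 4])
assert not can_split([2, 3, 6])    # sum 11 is odd
```

The downward iteration over j is the standard 0/1-knapsack trick: going upwards would allow the same element to be counted twice.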
# FAQ

## I've only just got here, what's all this about?

It's about the computation of the first occurrence of gaps between consecutive prime numbers and is part of a wider effort researching aspects of Goldbach's conjecture, one of the oldest and best-known unsolved problems in number theory - and all of mathematics. Goldbach's conjecture in modern form is "every even number larger than four is the sum of two odd prime numbers". The conjecture has been shown to hold for all integers less than $4 \cdot 10^{18}$ but remains unproven despite considerable effort.

The computation of the first occurrence of prime gaps of a given (even) size between consecutive primes has some theoretical interest. Richard Guy (Erdős number 1) assigns this as problem A8 ("A8 Gaps between primes. Twin primes") in chapter 1 ("Prime Numbers") of his book "Unsolved Problems in Number Theory". Guy's description of A8 is usefully available to read online at Google Books (scroll down to p31).

## So what's the actual problem?

To describe the problem precisely we need to establish some terms. Let $p_k$ be the $k$th prime number, i.e. $p_1=2$, $p_2=3$, $p_3=5$, ..., and let $g_k = p_{k+1} - p_k$ be the gap between the consecutive primes $p_k$ and $p_{k+1}$. The interest is in how $g_k$ (the size of the gap) grows as the size of the prime numbers grows. In 1936, in a paper submitted to Acta Arithmetica titled "On the order of magnitude of the difference between consecutive prime numbers", Swedish mathematician Harald Cramér offered a conjecture --- based on probabilistic ideas --- that the large values of $g_k$ grow like $(\log p_k)^2$. The actual problem is that our empirical data does not allow us to discriminate between the growth rate conjectured by Cramér and other conjectured possible growth rates, say $(\log \pi(p_k))^2$ for example (where $\pi(x)$ is the usual prime counting function and $\pi(p_k) = k$).
Another example is identified by Tomás Oliveira e Silva in Gaps between consecutive primes, where he observes that his empirical data suggests yet another growth rate, namely that of the square of the Lambert W function --- or "omega function" (not the title of a Robert Ludlum thriller, I learn). The trouble is that these growth rates differ by very slowly growing factors (such as $\log \log p_k$), and much more data is needed to verify empirically which one is closer to the true growth rate.

The actual actual problem is that right now, we don't know of any general method more sophisticated than an exhaustive search for the determination of first occurrences and maximal prime gaps. In essence, we're limited to sieving successive blocks of positive integers for primes, recording the successive differences, and thus determining directly the first occurrences and maximal gaps. And, as the size of the prime numbers increases, so does the amount of computational effort required to do the sieving, etc.

## Why the focus on gaps of "record" size?

Large (or small) gaps can be more interesting if they are of sufficient merit. A gap's merit indicates how much larger the gap is than the average gap between primes near that point (the average near $x$ being $\ln(x)$, as a consequence of the Prime Number Theorem). The greater the merit, the more unusual the gap. The more unusual the gap, the more interesting it is, as an outlier, from a number theory perspective. The following graph (taken from Tomás Oliveira e Silva's Gaps between consecutive primes) charts the available values of $P(g)$ that they were able to compute (between 2001 and 2012) and illustrates the principle of merit. The black line represents the lower bound for $P(g)$ suggested by Cramér's conjecture; the white dots are gaps between probable primes.
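The sieve-and-record procedure described above can be sketched in miniature; this small version (my own code, not the project's) reproduces the published maximal-gap records below 10^4:

```python
def record_gaps(limit):
    """Return [(p, g)]: primes p starting a gap g larger than any earlier gap."""
    sieve = bytearray([1]) * limit          # sieve of Eratosthenes
    sieve[0:2] = b'\x00\x00'
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytes(len(range(i * i, limit, i)))
    primes = [i for i in range(limit) if sieve[i]]

    records, best = [], 0
    for p, q in zip(primes, primes[1:]):    # successive differences
        if q - p > best:                    # a new maximal gap
            best = q - p
            records.append((p, best))
    return records

recs = dict(record_gaps(10_000))
assert recs[113] == 14     # the first gap of 14 follows 113
assert recs[887] == 20     # the first gap of 20 follows 887
assert recs[9551] == 36    # the first gap of 36 follows 9551
```

A real search works the same way, only over successive blocks far beyond any range a single in-memory sieve could hold.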
The noticeable outlier - the gap of 1132 - is of significance to the related conjectures put forth by Cramér (1936) and Shanks (1964), concerning the ratio $g/\ln^2(p_1)$. Shanks reasoned that its limit, taken over all first occurrences, should be 1; Cramér argued that the limit superior, taken over all prime gaps, should be 1. Granville (1994), however, provides evidence that the limit superior is $\geq 2e^{-\gamma} \approx 1.1229$. For the 1132 gap, the ratio is 0.9206, the largest value observed for any $p_1 > 7$ thus far.

## What's the current state of play?

Over the last few decades, exhaustive search has continued to push the envelope, courtesy of faster computers and concerted effort. All prime gaps in $0 < x < 2^{64}$ have now been analyzed, where $2^{64} = 18446744073709551616$ is the smallest positive integer requiring more than 64 bits in its binary representation, i.e. not representable in C as a uint64_t. The final push from 18446744000000000000 to $2^{64}$ was carried out by the combined efforts of members of the Prime Gap Searches (PGS) project at the Mersenne Forum: Jerry LaGrou, Dana Jacobsen, Robert Smith, and Robert Gerbicz.

## Does getting high merits get harder when you get to larger gaps?

Primality tests take longer, so the whole search process takes longer. For example, searches with 11k digit numbers are very slow. Empirically in the 100-8000 digit range, the BPSW test is about $O(\log^{2.5}(n))$, i.e. 2x larger size is 5-6x longer time. The larger size also means a longer range for a large merit, which means more tests - presumably $\log(n)$ growth. There is a complicating factor of the partial sieve, which has a dynamic $\log^2(n)$ depth. Usually the tradeoff is that small sizes run faster but are better covered, hence need high merits to get a record. Large sizes (200k+) are slow but are so sparse that almost anything found is a record.
The sweet spot this year (2015 at the time of writing) seems to be in the 70-90k range for efficiency of generating records. There are lots of gaps with merit under 10. A little experiment looking at the time and number of merits >= 5.0 found using $k \cdot p\#/30 - b$ where k=1..10000, without multiples of 2, 3, 5:

```
p=20:  1.7s   102 found = 60/s (28-30 digits)
p=40:  4.1s   236 found = 58/s (69-71 digits)
p=80:  19.6s  515 found = 26/s (166-169 digits)
p=160: 235s   985 found = 4/s  (392-395 digits)
```

Interestingly with this form, the number we find with merit >= 5 goes up as p gets larger, but the time taken goes up quite a bit faster. This explains the shape of the graph of current records: high at the beginning and dropping off as gap size increases. It's certainly possible that a different method of selecting the search points would be more efficient, and it's also possible to improve the speed of this or other methods vs. doing prev/next prime with my GMP code. For example, with numbers larger than ~3000 digits, using gwnum would be faster than GMP. Gapcoin uses a different method, but it's not obvious how to get exact efficiency comparisons.

## Where to look for gaps?

There is little point in looking for gaps < 1,352, as an exhaustive search of primes up to $4 \cdot 10^{18}$ has been carried out and all gaps smaller than this have been found. As of the summer of 2014, the Nicely site had early instance prime gaps with merit > 10 listed for all possible gaps < 60,000, and an early effort by the Mersenne Forum has been to extend the early instance list up to 100,000. At the far end of the scale, the Mersenne Forum is helping to support the largest gap search, looking at a candidate gap (4,680,156) provided by Mersenne Forum member mart_r. This has a merit > 20.

## What's the best primality test which guarantees a 100% accurate result but can be done in polynomial time?

The following two are only 100% accurate within the range given.

• For 64-bit inputs, BPSW.
There are also other known methods, and the optimal solution is a mix. The result is unconditionally correct for all 64-bit inputs, and is extremely fast. It's also commonly used on larger inputs as a compositeness test (sometimes called a probabilistic primality test), as it is fast and has no known counterexamples, with some good underlying reasons as to why we expect them to be rare. • For up to about 82-bit inputs, deterministic Miller-Rabin. This is a fairly recent result. All the following methods (ECPP, APR-CL, and AKS) are unconditionally correct for all sizes if they give an output, and all finish in polynomial time for the input sizes that are at all practical on today's computers (e.g. finishing within 100 years on a large cluster). • For heuristic polynomial time, ECPP using Atkin-Morain methods. It is $O(\log^5 n)$ or $O(\log^4 n)$ depending on implementation. It is not guaranteed to finish in this time, but there are well-written heuristic analyses that show this complexity, and many millions of runs of practical software showing it matches those results. Primo uses ECPP. Almost all recent general-form proof records in the last 20 years have been done with ECPP. The output includes a certificate of primality which can be verified in guaranteed polynomial time (with a small exponent). • APR-CL is another good method that is polynomial time in practice although not asymptotically so (the exponent has a factor of $\log\log\log(n)$ in it, which is less than a small constant for any size $n$ we would be applying it to). Pari/GP uses this. It does not output a certificate. • AKS is deterministic and polynomial-time for general-form inputs of all sizes, and unconditionally correct like the others. It is also horrendously slow in practice. It is not used in practice because we have much better methods.
If you're writing a paper or dealing with theoretical complexity, just say "AKS shows this problem is in P" and move on. That is the "best" result, considering it's short and people will nod and move on to the rest of your paper. For small inputs such as 64-bit (numbers smaller than 18,446,744,073,709,551,616), we've known for a few years that BPSW is unconditionally correct. It is extremely fast and easy. Slightly easier to program are best-known deterministic Miller-Rabin base sets. For 32-bit inputs, the optimal solution seems to be trial division for tiny inputs and a hashed single-base Miller-Rabin test for the rest. For use in practice, APR-CL or ECPP. They don't check every box that AKS does (non-randomized and asymptotically polynomial), but they finish in polynomial time for numbers of the size we care about, with a lower exponent and much less overhead than AKS. If you want a certificate, then ECPP. This lets others quickly verify that the result actually is prime, rather than just taking it on trust that you ran the test. APR-CL and AKS do not have certificates. Some questions and answers have been compiled from posts by members of the Mersenne Forum, a forum established in support of the Great Internet Mersenne Prime Search (GIMPS), but mostly they are my brutalisation of the concise and accurate writing of Drs Thomas R. Nicely and Tomás Oliveira e Silva, whose forgiveness I beg.
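As an illustration of the 64-bit case, here is a sketch (mine, not from the source) of a deterministic Miller-Rabin test. The twelve-prime base set {2, 3, ..., 37} is a published set known to suffice for all n < 2^64; the hashed single-base and BPSW approaches mentioned above are faster, this is just the easiest to write down:

```python
def is_prime_64(n):
    """Deterministic Miller-Rabin for n < 2**64: the first twelve prime
    bases are a known sufficient witness set for this range."""
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 2:
        return False
    for p in bases:                      # handle small primes and their multiples
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                      # write n - 1 = d * 2**s with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):           # square up to s-1 times looking for n-1
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False                 # a is a witness: n is composite
    return True
```

Since `pow` with a modulus is fast, this answers any 64-bit input essentially instantly.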
{}
# Parallel port controls a relay

If I want to use a parallel port to control this 8-channel relay instead of an Arduino, do I need to make any modifications?

I'm reasonably sure that you don't want to control the relays from a parallel port, but from a PC. The parallel port is a solution, not the question. Parallel ports are so 1980s; none of my PCs in the last 10 years has had one. That also means you may have problems finding the right drivers for your PC software. I would suggest another route: why wouldn't you use the standard I/O interface on PCs, USB? This module gives you 8 general-purpose I/Os which you can control from your PC. The yellow jumper selects the output voltage level: 5 V or 3.3 V. A low output level will switch on the relay. Each output can sink 20 mA, but the total of 160 mA is not a problem, since the current comes from the relay module's power supply, not the USB bus. On the FTDI website you can download drivers for several different operating systems, and find application examples.

• I would rather use a Raspberry Pi, because it comes with 8+ GPIOs built in. The Bus Pirate is also a nice tool, and some USB2GPIO cards come with A/D ports. But a nano board costs around $50, so it's all about common sense. – Standard Sandun Oct 8 '12 at 9:46
• @sandun - The RPi costs 4 times the breakout board I mention in my answer, and seems overkill to me if you just want to switch relays on and off. The RPi also means that you have to write software for it; the FT245 breakout board only needs software on the PC. I'm also not sure about the output current drive capabilities (the RPi is very badly documented). And then there's the 5 V from the relay module, versus the RPi's 3.3 V. The RPi's I/Os are not 5 V tolerant... – stevenvh Oct 8 '12 at 10:12
• @sandun - ...A single-board computer may be more suitable if the control is more complex, like PWM-controlled motors, or if you need monitoring or so.
Even then, I think a cheap Arduino is cheaper than the RPi. The RPi may be a great board, especially for the price, but I wouldn't pay for a 700 MHz ARM, 256 MB memory, video and audio just to switch a relay. – stevenvh Oct 8 '12 at 10:19

• @stevenvh $35 is not 4 times $15, even if you add a few dollars for an SD card root filesystem. And you are comparing something from a known source to something from an unknown one. But they are of course different sorts of solution. – Chris Stratton Oct 8 '12 at 15:35
• @Chris - I searched the 'Net for a price, and the first one I found was 62 dollars. May be wrong, but even at 35 dollars I think my point is still valid: the RPi is overkill if all you need is switching a relay on and off. I think the RPi has a much better value/cost ratio than an Arduino, but why would I spend the extra 20 dollars on features like video that I don't need? – stevenvh Oct 8 '12 at 15:43

Parallel port from a PC?? A PC parallel port should be able to drive the inputs to this relay module. A typical output of one of the DB0 to DB7 lines from the PC parallel port would be able to turn on the relay coil when the port bit is at a low level and the proper opto-coupler / relay coil voltages are supplied to the relay module. You do need to make sure to also connect the GND pin of the relay module to the parallel port connector GND pins (numbers 18-25 of the DSub-25 parallel port connector).

• You need to do many things. For example, configure it in SPP mode; in Linux you could do this with ioperm() [direct I/O] or using PPDEV. – Standard Sandun Oct 8 '12 at 7:09

You'll need a buffer. I highly doubt your port can source 20 mA per pin.

• Maybe not source, but sinking that much is not out of the question. And sinking is what is required, since it's the cathode of the opto-coupler LED which is connected to the input. – Chris Stratton Oct 8 '12 at 15:40
• Ah, that'll work. – regomodo Oct 9 '12 at 9:07
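To make the active-low logic concrete, here is a small hypothetical Python helper (the name and numbering are mine, not from any driver) that computes the byte to write to the port's data register (D0-D7) so that a chosen set of relay channels is energised. On Linux you would then write this byte through PPDEV or ioperm()/outb(), as mentioned in the comments above:

```python
def relay_byte(on_channels):
    """Byte for the parallel-port data register driving an active-low
    8-channel relay board: a 0 bit energises that relay, a 1 bit leaves
    it off. Channels are numbered 1..8 and map to data lines D0..D7."""
    byte = 0xFF                          # all lines high -> all relays off
    for ch in on_channels:
        if not 1 <= ch <= 8:
            raise ValueError("channel must be 1..8")
        byte &= ~(1 << (ch - 1)) & 0xFF  # pull that data line low
    return byte
```

For example, energising only relay 1 means writing 0xFE (all bits high except D0).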
{}
# How to solve assumption based CAT RC and CR Questions ## Assumptions: The Critical Reasoning section typically includes several questions that test your ability to identify assumptions of arguments. An assumption of an argument plays a role in establishing the conclusion. However, unlike a premise, an assumption is not something that the arguer explicitly asserts to be true; an assumption is instead just treated as true for the purposes of the argument. Although assumptions can be stated explicitly in an argument, Critical Reasoning questions that ask about assumptions ask only about unstated assumptions. Unstated (or tacit) assumptions can figure only in arguments that are not entirely complete, that is, in arguments in which some of the things required to establish the conclusion are left unstated. There is thus at least one significant gap in such an argument. Assumptions relate to the gaps in an argument in two different ways. An assumption is a sufficient one if adding it to the argument’s premises would produce a conclusive argument, that is, an argument with no gaps in its support for the conclusion. An assumption is a necessary one if it is something that must be true in order for the argument to succeed. Typical wordings of questions that ask you to identify sufficient assumptions are: Which one of the following, if assumed, enables the conclusion of the argument to be properly drawn? The conclusion follows logically from the premises if which one of the following is assumed? An Example: John comes to college in a Mercedes. He, therefore, must be rich. The conclusion follows logically if which one of the following is assumed? In order to approach this question, you first have to identify the conclusion of the argument and the premises offered in its support. 
In this case, the conclusion is signaled by the conclusion indicator "therefore" and reads "…He (John) must be rich." There is only one consideration explicitly presented in support of this conclusion: John comes to college in a Mercedes. Note that the premise talks only about John's coming to college in a Mercedes. It makes no reference to John being rich. Thus there is a gap between what is given to us as a premise and what has been concluded on the basis of that premise. For the conclusion to follow logically, this gap has to be bridged. The assumption is this gap; in other words, the assumption is the unstated premise that helps us further support the author's conclusion.

John comes to college in a Mercedes. John, therefore, must be rich.

Assumption: A Mercedes can be owned only by rich people.

## How to approach assumption questions?

Negation of the assumption is one of the most common methods to arrive at the right answer choice. In other words, if your assumption is correct, negating it will weaken your conclusion. In the example above, let's negate the assumption. The negated statement will be: A Mercedes need not necessarily be owned by rich people. If this statement is true, then John need not necessarily be rich, because even non-rich people can own a Mercedes. Thus the conclusion is weakened.
{}
# Mathematical Concepts and Principles of Naive Bayes

Published: 06/08/2017 Last Updated: 03/14/2018

Simplicity is the ultimate sophistication. —Leonardo da Vinci

With time, machine learning algorithms are becoming increasingly complex. In most cases this increases accuracy at the expense of longer training times. Fast-training algorithms that deliver decent accuracy are also available; these are generally based on simple mathematical concepts and principles. Today, we'll have a look at one such machine-learning classification algorithm, naive Bayes. It is an extremely simple, probabilistic classification algorithm which, astonishingly, achieves decent accuracy in many scenarios.

## Naive Bayes Algorithm

In machine learning, naive Bayes classifiers are simple, probabilistic classifiers that use Bayes' Theorem. Naive Bayes makes strong (naive) independence assumptions between features. In simple terms, a naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature. For example, a ball may be considered a soccer ball if it is hard, round, and about seven inches in diameter. Even if these features depend on each other or on the existence of the other features, naive Bayes treats all of these properties as contributing independently to the probability that the ball is a soccer ball. This is why it is known as naive. Naive Bayes models are easy to build and are very useful for very large datasets. Although naive Bayes models are simple, they are known at times to outperform even highly sophisticated classification models. Because they also require a relatively short training time, they make a good alternative for use in classification problems.

## Mathematics Behind Naive Bayes

Bayes' Theorem provides a way of calculating the posterior probability P(c|x) from P(c), P(x), and P(x|c).
Consider the following equation:

P(c|x) = P(x|c) × P(c) / P(x)

Here,

• P(c|x): the posterior probability of class (c, target) given predictor (x, attributes). This represents the probability of c being true, provided x is true.
• P(c): the prior probability of class. This is the observed probability of the class out of all the observations.
• P(x|c): the likelihood, which is the probability of predictor given class. This represents the probability of x being true, provided c is true.
• P(x): the prior probability of predictor. This is the observed probability of the predictor out of all the observations.

Let's better understand this with the help of a simple example. Consider a well-shuffled deck of playing cards. A card is picked from that deck at random. The objective is to find the probability of a King, given that the card picked is red in color. Here,

P(King | Red Card) = ?

We'll use:

P(King | Red Card) = P(Red Card | King) × P(King) / P(Red Card)

So,

P(Red Card | King) = probability of getting a red card given that the card chosen is a King = 2 Red Kings / 4 Total Kings = 1/2
P(King) = probability that the chosen card is a King = 4 Kings / 52 Total Cards = 1/13
P(Red Card) = probability that the chosen card is red = 26 Red Cards / 52 Total Cards = 1/2

Hence, the posterior probability of randomly choosing a King given a red card is:

P(King | Red Card) = (1/2) × (1/13) / (1/2) = 1/13 ≈ 0.077

## Understanding Naive Bayes with an Example

Let's understand naive Bayes with one more example—predicting the weather from three predictors: humidity, temperature, and wind speed. The training data is the following:

Humidity | Temperature | Wind Speed | Weather
--- | --- | --- | ---
Humid | Hot | Fast | Sunny
Humid | Hot | Fast | Sunny
Humid | Hot | Slow | Sunny
Not Humid | Cold | Fast | Sunny
Not Humid | Hot | Slow | Rainy
Not Humid | Cold | Fast | Rainy
Humid | Hot | Slow | Rainy
Humid | Cold | Slow | Rainy

We'll use naive Bayes to predict the weather for the following test observation:

Humidity | Temperature | Wind Speed | Weather
--- | --- | --- | ---
Humid | Cold | Fast | ?
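This prediction can be computed mechanically by counting over the eight training rows; a small illustrative Python sketch (the function and variable names are my own, not from the article):

```python
from collections import Counter

# (humidity, temperature, wind, weather) rows from the training table above
rows = [
    ("Humid", "Hot", "Fast", "Sunny"), ("Humid", "Hot", "Fast", "Sunny"),
    ("Humid", "Hot", "Slow", "Sunny"), ("Not Humid", "Cold", "Fast", "Sunny"),
    ("Not Humid", "Hot", "Slow", "Rainy"), ("Not Humid", "Cold", "Fast", "Rainy"),
    ("Humid", "Hot", "Slow", "Rainy"), ("Humid", "Cold", "Slow", "Rainy"),
]

def posterior(obs):
    """Naive Bayes posteriors for one observation: class prior times the
    product of per-feature likelihoods, normalised by the shared evidence."""
    class_counts = Counter(r[-1] for r in rows)
    scores = {}
    for c, n_c in class_counts.items():
        score = n_c / len(rows)                     # prior P(c)
        for i, value in enumerate(obs):             # naive likelihoods P(x_i | c)
            score *= sum(1 for r in rows if r[-1] == c and r[i] == value) / n_c
        scores[c] = score
    evidence = sum(scores.values())                 # P(x), same for every class
    return {c: s / evidence for c, s in scores.items()}
```

Calling `posterior(("Humid", "Cold", "Fast"))` favours Sunny, matching the hand derivation that follows.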
We have to determine which posterior is greater, Sunny or Rainy. For the classification Sunny, the posterior is given by:

     Posterior(Sunny) = P(Sunny) × P(Humid | Sunny) × P(Cold | Sunny) × P(Fast | Sunny) / evidence

Similarly, for the classification Rainy, the posterior is given by:

     Posterior(Rainy) = P(Rainy) × P(Humid | Rainy) × P(Cold | Rainy) × P(Fast | Rainy) / evidence

where

     evidence = [P(Sunny) × P(Humid | Sunny) × P(Cold | Sunny) × P(Fast | Sunny)] + [P(Rainy) × P(Humid | Rainy) × P(Cold | Rainy) × P(Fast | Rainy)]

Counting over the eight training rows:

     P(Sunny) = 0.5, P(Rainy) = 0.5
     P(Humid | Sunny) = 0.75, P(Cold | Sunny) = 0.25, P(Fast | Sunny) = 0.75
     P(Humid | Rainy) = 0.5, P(Cold | Rainy) = 0.5, P(Fast | Rainy) = 0.25

Therefore, evidence = 0.0703 + 0.0313 = 0.1016, and

     Posterior(Sunny) = 0.0703 / 0.1016 ≈ 0.692
     Posterior(Rainy) = 0.0313 / 0.1016 ≈ 0.308

Since the posterior is greater in the Sunny case, we predict the test observation is Sunny.

## Applications of Naive Bayes

Naive Bayes classifiers can be trained very efficiently in a supervised learning setting. In many practical applications, parameter estimation for naive Bayes models uses the method of maximum likelihood. Despite their naive design and apparently oversimplified assumptions, naive Bayes classifiers have worked quite well in many complex real-world situations.

• Recommendation System: Naive Bayes classifiers are used in various inferencing systems for making certain recommendations to users out of a list of possible options.
• Real-Time Prediction: Naive Bayes is a fast algorithm, which makes it an ideal fit for making predictions in real time.
• Multiclass Prediction: This algorithm is also well known for its multiclass prediction feature: we can predict the probability of multiple classes of the target variable.
• Sentiment Analysis: Naive Bayes is used in sentiment analysis on social networking datasets like Twitter* and Facebook* to identify positive and negative customer sentiments.
• Text Classification: Naive Bayes classifiers are frequently used in text classification and provide a high success rate compared to other algorithms.
• Spam Filtering: Naive Bayes is widely used in spam filtering to identify spam email.

## Why is Naive Bayes so Efficient?

An interesting point about naive Bayes is that even when the independence assumption is violated and there are clear, known relationships between attributes, it still works decently. There are two reasons that make naive Bayes a very efficient algorithm for classification problems.

1. Performance: The naive Bayes algorithm performs well even when the dataset contains correlated variables, despite its basic assumption of independence among features. The reason is that in a given dataset, two attributes may depend on each other, but the dependence may be distributed evenly across the classes. In that case the conditional independence assumption is violated, yet naive Bayes can still be the optimal classifier. Further, what eventually affects the classification is the combination of dependencies among all attributes. If we just look at two attributes, there may exist strong dependence between them that affects the classification. When the dependencies among all attributes work together, however, they may cancel each other out and no longer affect the classification. Therefore, we argue that it is the distribution of dependencies among all attributes over classes that affects the classification of naive Bayes, not merely the dependencies themselves.

2. Speed: The main cause of naive Bayes's fast training is that it converges toward its asymptotic accuracy at a different rate than other methods, like logistic regression, support vector machines, and so on. Naive Bayes parameter estimates converge toward their asymptotic values in order log(n) examples, where n is the number of dimensions.
In contrast, logistic regression parameter estimates converge more slowly, requiring order n examples. It is also observed on several datasets that logistic regression outperforms naive Bayes when training examples are available in abundance, but naive Bayes outperforms logistic regression when training data is scarce.

## Practical Applications of Naive Bayes: Email Classifier—Spam or Ham?

Let's see a practical application of naive Bayes for classifying email as spam or ham. We will use sklearn.naive_bayes to train a spam classifier in Python*.

```python
import os
import io
import numpy
from pandas import DataFrame
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
```

The following example will be using the MultinomialNB operation.

```python
def readFiles(path):
    for root, dirnames, filenames in os.walk(path):
        for filename in filenames:
            path = os.path.join(root, filename)
            inBody = False
            lines = []
            f = io.open(path, 'r', encoding='latin1')
            for line in f:
                if inBody:
                    lines.append(line)
                elif line == '\n':
                    inBody = True
            f.close()
            message = '\n'.join(lines)
            yield path, message
```

Creating a function to help us create a DataFrame (note that it must loop over readFiles to populate its rows):

```python
def dataFrameFromDirectory(path, classification):
    rows = []
    index = []
    for filename, message in readFiles(path):
        rows.append({'message': message, 'class': classification})
        index.append(filename)
    return DataFrame(rows, index=index)

data = DataFrame({'message': [], 'class': []})
data = data.append(dataFrameFromDirectory('/…/SPAMORHAM/emails/spam/', 'spam'))
data = data.append(dataFrameFromDirectory('/…/SPAMORHAM/emails/ham/', 'ham'))
```

Let's have a look at that DataFrame:

```python
data.head()
```

```
                                                                  class  message
/…/SPAMORHAM/emails/spam/00001.7848dde101aa985090474a91ec93fcf0   spam   <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Tr...
/…/SPAMORHAM/emails/spam/00002.d94f1b97e48ed3b553b3508d116e6a09   spam   1) Fight The Risk of Cancer!\n\nhttp://www.adc...
```
```
/…/SPAMORHAM/emails/spam/00003.2ee33bc6eacdb11f38d052c44819ba6c   spam   1) Fight The Risk of Cancer!\n\nhttp://www.adc...
/…/SPAMORHAM/emails/spam/00004.eac8de8d759b7e74154f142194282724   spam   ##############################################...
/…/SPAMORHAM/emails/spam/00005.57696a39d7d84318ce497886896bf90d   spam   I thought you might like these:\n\n1) Slim Dow...
```

Now we will use a CountVectorizer to split up each message into its list of words, and throw that into a MultinomialNB classifier. Call the fit() method:

```python
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(data['message'].values)
counts
```

```
<3000x62964 sparse matrix of type '<type 'numpy.int64'>'
    with 429785 stored elements in Compressed Sparse Row format>
```

Now we are using MultinomialNB():

```python
classifierModel = MultinomialNB()
## 'class' is the target
targets = data['class'].values
## Using counts to fit the model
classifierModel.fit(counts, targets)
```

```
MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
```

The classifierModel is ready. Now, let's prepare sample email messages to see how the model works. Email number 1 is "Free Viagra now!!!", email number 2 is "A quick brown fox is not ready", and so on:

```python
examples = ['Free Viagra now!!!',
            "A quick brown fox is not ready",
            "Could you bring me the black coffee as well?",
            "Hi Bob, how about a game of golf tomorrow, are you FREE?",
            "Dude , what are you saying",
            "I am FREE now, you can come",
            "FREE FREE FREE Sex, I am FREE",
            "CENTRAL BANK OF NIGERIA has 100 Million for you",
            "I am not available today, meet Sunday?"]
example_counts = vectorizer.transform(examples)
```

Now we are using the classifierModel to predict. Let's check the prediction for each email:

```python
predictions = classifierModel.predict(example_counts)
predictions
```

```
array(['spam', 'ham', 'ham', 'ham', 'ham', 'ham', 'spam', 'spam', 'ham'],
      dtype='|S4')
```

Therefore, the first email is spam, the second is ham, and so on.
## End Notes

We hope you have gained a clear understanding of the mathematical concepts and principles of naive Bayes using this guide. It is an extremely simple algorithm, with oversimplified assumptions that might not hold true in many real-world scenarios. In this article we explained why naive Bayes nevertheless often produces decent results. We feel naive Bayes is a very good algorithm, and its performance, despite its simplicity, is astonishing.
{}
Caesar-Cipher - 2nd follow-up

I've now used the tips, which you can find here and there, to improve my code:

```java
import javax.swing.JOptionPane;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import java.awt.Dimension;

public class CaesarCipher {

    enum WhatToDo {ENCRYPT, DECRYPT};

    public static void main(String[] args) {
        String UserInput = JOptionPane.showInputDialog("Please enter text:");
        String text = UserInput.replaceAll("[^a-zA-Z]+", "");
        text = text.toUpperCase();
        String message = "Please enter shift to the right:";
        int shift = findShift(message);
        String out = "";
        message = "Encrypt or decrypt?";
        out = MakeDecision(message, text, shift);
        JTextArea msg = new JTextArea(out);
        msg.setLineWrap(true);
        msg.setWrapStyleWord(true);
        JScrollPane scrollPane = new JScrollPane(msg);
        scrollPane.setPreferredSize(new Dimension(300, 300));
        JOptionPane.showMessageDialog(null, scrollPane);
    }

    public static int findShift(String msg) {
        String UserInput = JOptionPane.showInputDialog(msg);
        int shift;
        try {
            shift = Integer.parseInt(UserInput);
        } catch (NumberFormatException e) {
            shift = findShift("Please enter shift as a number:");
        }
        return shift;
    }

    public static String MakeDecision(String message, String text, int shift) {
        String UserInput = JOptionPane.showInputDialog(message);
        UserInput = UserInput.toUpperCase();
        String out = "";
        if (WhatToDo.ENCRYPT.name().equals(UserInput) == true) {
            boolean test = true;
            out = EncryptOrDecrypt(text, shift, test);
        } else if (WhatToDo.DECRYPT.name().equals(UserInput) == true) {
            boolean test = false;
            out = EncryptOrDecrypt(text, shift, test);
        } else {
            message = "Illegal Choice! Encrypt or decrypt?";
            out = MakeDecision(message, text, shift);
        }
        return out;
    }

    //Encryption
    public static String EncryptOrDecrypt(String text, int n, boolean test) {
        int count = 0;
        int alphabetLength = 0;
        int decryptFactor = 1;
        if (!test) {
            decryptFactor = -1;
        }
        StringBuilder out = new StringBuilder(); //Empty string for result.
        while (count < text.length()) {
            final char currentChar = text.charAt(count);
            if (currentChar >= 'A' && currentChar <= 'Z') {
                if (currentChar + n > 'Z') {
                    alphabetLength = 26;
                }
                out.append((char) (currentChar + (n * decryptFactor) - (alphabetLength * decryptFactor)));
            } else {
                out.append(currentChar);
            }
            count++;
            alphabetLength = 0;
        }
        return out.toString();
    }
}
```

Do you think that this is now good code? Do you have any tips for further improvement?

• In EncryptOrDecrypt, why is the if (currentChar >= 'A' && currentChar <= 'Z') test there? Aren't all non-alphabetics filtered out and everything set to uppercase? – JollyJoker Dec 13 '19 at 12:20
• That's true. But I just ask myself whether this makes sense. I mean, I want to encrypt only alphabetic symbols, but it doesn't really make sense to filter them out. I will change this and only filter spaces. – chrysaetos99 Dec 13 '19 at 12:30

I don't think you gain anything by using a StringBuilder. Modifying a char[] is easier to use and just as easy to make a string with. For your enum, I would suggest naming it something like EncryptionMode. The names of the arguments need work: instead of n use shift, and instead of test change it to take the enum (EncryptionMode mode). You're restricting the shift to just the upper-case alphabet; to me it makes more sense to use the whole ASCII character set, so you can handle any letters, numbers and punctuation. Instead of assigning the shift modifier to a separate variable, I would suggest modifying the shift variable. Any time you use magic values, try to set up constants instead. This puts names to the values, making it easier to decipher why they are there. Putting this all together.
It could look like this:

```java
final static char UPPER_LIMIT = (char) 255;
final static int NO_ASCII_CHARS = 256;

enum EncryptionMode {ENCRYPT, DECRYPT};

public static String EncryptOrDecrypt(String text, int shift, EncryptionMode mode) {
    if (mode == EncryptionMode.DECRYPT) {
        shift *= -1;
    }
    char[] chars = text.toCharArray();
    for (int i = 0; i < chars.length; ++i) {
        chars[i] += shift;
        if (chars[i] < '\0') {
            chars[i] = (char) (chars[i] + UPPER_LIMIT);
        } else {
            chars[i] = (char) (chars[i] % NO_ASCII_CHARS);
        }
    }
    return new String(chars);
}
```

Took another look and realized this code could be more performant. Here's the revision:

```java
public static String EncryptOrDecrypt(String text, int shift, EncryptionMode mode) {
    if (mode == EncryptionMode.DECRYPT) {
        shift *= -1;
    }
    char[] chars = new char[text.length()];
    for (int i = 0; i < chars.length; ++i) {
        chars[i] = (char) (shift + text.charAt(i));
        if (chars[i] < '\0') {
            chars[i] = (char) (chars[i] + UPPER_LIMIT);
        } else {
            chars[i] = (char) (chars[i] % NO_ASCII_CHARS);
        }
    }
    return new String(chars);
}
```

• I disagree on using arrays. What do you think StringBuilder is for if you wouldn't use it here? – JollyJoker Dec 13 '19 at 11:06
• My take is it works best for strings that will have an unknown length. If the length is predetermined it is much simpler and easier to use a char array. – tinstaafl Dec 13 '19 at 17:57
• I try to avoid referring to anything by indexes unless I have to, which maybe explains why I see this differently. I'd use for (char c : text.toCharArray()) with a StringBuilder, or just do it via an IntStream. – JollyJoker Dec 13 '19 at 20:47

I'm going to copy and paste snippets from your other reviews that you haven't implemented, as well as my own review:

"shortened names like out or d don't really have any benefit over slightly longer, clearer ones"

You still use the exact name "out" in your code: String out = "";

"stick with code style guides"

Java in particular has common naming standards.
You should use lowerCamelCase for class & method variables, such as String UserInput. You may even notice StackExchange has given it different highlighting. Normally UpperCamelCase is used for classes.

while (count < ...

I'd suggest using a for-each loop here instead, since you use the 'count' variable only for indexing.

WhatToDo

I really don't like this name. I missed it at the top and it really surprised me when I first saw it used. "WhatToDo" is not a good name. "test" is also a really bad name. There is lots of information available online and on this site about variable naming. As a general rule, try to look at the name by itself and see if it's at all descriptive. I don't see the point in declaring the variable "test"; just pass a boolean true/false directly to the method.

• I've changed "out" to "output". Also, I now use UpperCamelCase for classes and methods and lowerCamelCase for variables. I also renamed "WhatToDo"; it's now called "encryptionOrDecryption". Anything else I should think about? – chrysaetos99 Dec 12 '19 at 19:41
• @chrysaetos99 There is more; my review was pretty quick. Methods are normally lowerCamelCase. – dustytrash Dec 12 '19 at 19:49
• Oh ok, then I will change this. Could you go a bit into detail about what your other suggestions are? – chrysaetos99 Dec 12 '19 at 20:01
{}
# Tabularx line break with long space between words [duplicate]

This question already has an answer here:

I have a problem when using the package tabularx. Here is how I use the table:

```latex
\documentclass{report}
\usepackage{tabularx}
\usepackage[margin=3.5cm]{geometry}
\begin{document}
\noindent
\begin{tabular}[H]{|l|c|c|c|}
\hline
Simple text & A simple long text maybe very long or not i dont know c: & Simple text & Another simple long text maybe very long or not i dont know c:\\
\hline
\end{tabular}

\bigskip

\noindent\begin{tabularx}{\textwidth}{|c|X|X|X|}
\hline
Simple text & A simple long text maybe very long or not i dont know c: & Simple text & Another simple long text maybe very long or not i dont know c:\\
\hline
\end{tabularx}
\end{document}
```

I was using the regular tabular, but I need automatic line breaking. I get these long spaces when it breaks lines. How can I get normal spacing? Thanks in advance, MYT.

SOLUTION by leandriis: The 'long spaces' are due to X columns being based on justified p columns. To get rid of them you could define your own column type as follows: \newcolumntype{Y}{>{\raggedright\arraybackslash}X}

## marked as duplicate by egreg, Bobyandbob, David Carlisle, user36296, Stefan Pinnow Feb 2 '18 at 10:59

• The 'long spaces' are due to X columns being based on justified p columns. To get rid of them you could define your own column type as follows: \newcolumntype{Y}{>{\raggedright\arraybackslash}X}. I don't understand the part about the margin. If I use your MWE, the table spreads from the left to the right margin as expected. You can check that by including \usepackage{showframe}. – leandriis Feb 2 '18 at 7:59
• I edited my post; I don't have a problem with the margin. It's just that when I made the two tables, I thought I had a margin problem. Thank you, your solution works perfectly – mytDRAGON Feb 2 '18 at 8:03
• please do not edit the solution in to the question, it makes a mess of the question/answer format. You can post a solution as an answer.
Also, what is the intention of \begin{tabular}[H]? tabular does not have an H option (it just ignores unknown options rather than giving an error) – David Carlisle Feb 2 '18 at 10:44
• That's because I used it on the regular tabular; I didn't think to delete it – mytDRAGON Feb 2 '18 at 13:50
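Putting the accepted fix into the original example, a minimal document might look like this (an untested sketch based on the answer above, with the Y column type from leandriis's comment):

```latex
\documentclass{report}
\usepackage{tabularx}
\usepackage[margin=3.5cm]{geometry}
% X columns are justified p columns; this ragged-right Y variant avoids the
% stretched inter-word spaces that appear at line breaks
\newcolumntype{Y}{>{\raggedright\arraybackslash}X}
\begin{document}
\noindent\begin{tabularx}{\textwidth}{|c|Y|Y|Y|}
\hline
Simple text & A simple long text maybe very long or not i dont know c: &
Simple text & Another simple long text maybe very long or not i dont know c:\\
\hline
\end{tabularx}
\end{document}
```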
{}
Re: [eigen] Problem inverting a Matrix4f with Eigen 2.0.0

• To: eigen@xxxxxxxxxxxxxxxxxxx
• Subject: Re: [eigen] Problem inverting a Matrix4f with Eigen 2.0.0
• From: Benoit Jacob <jacob.benoit.1@xxxxxxxxx>
• Date: Sat, 17 Jul 2010 12:18:46 -0400

But here as usual, I think it's LAPACK who is right, not MATLAB and NumPy! Even if the mutually inequivalent notions of condition numbers used by LAPACK feel complicated and inelegant, they are actually more relevant to the problems at hand, than a single unified notion of condition number can be. Another reason why we decided not to expose condition numbers in Eigen is that for the main purpose they're used for, namely checking if a result is reliable, there is a better approach which is: check the result itself. For example, if you want to check how accurate your matrix inverse is, just compute matrix*inverse and see how close it is to the identity matrix. Nothing beats that!
When it comes to more general solving with potentially non full rank matrices, this is even better, because the condition number of the lhs matrix alone doesn't tell all you need to know (it also depends on your particular rhs), so the approach we're recommending in Eigen, to compute lhs*solution and compare with rhs, is the only way to know for sure how good your solution is. Benoit 2010/7/17 Aron Ahmadia <aja2111@xxxxxxxxxxxx>: > I admit that pointing to LAPACK was a bad example (I am least familiar > with that package of those mentioned), however, MATLAB and NumPy are > far more common these days as computational interfaces than LAPACK > (even if they are built on top of its routines). > > A > > On Sat, Jul 17, 2010 at 6:55 PM, Aron Ahmadia <aja2111@xxxxxxxxxxxx> wrote: >> Hi Benoit, >> >> Sorry, I meant the inverse in this sense, this is something that >> arises when solving the two problems: >> >> Ab = x >> Ax = b >> >> Where I leave the unknown as x, and the fixed as b.  Both problems can >> be bound by a condition number that depends on the perturbations of x >> >> \kappa = ||A||*||b||/||x||     <= ||A||*||A^-1|| (forward) >> \kappa = ||A^-1||*||b||/||x|| <= ||A||*||A^-1|| (backward) >> >> The term ||A||*||A^-1||, since it arises in both forward and backward >> problems, is called the condition number of A.  This is pretty solidly >> in the literature, and you wouldn't confuse anybody if you had a >> general "calculate the condition number of a matrix" function and more >> specialized ones for calculating the condition numbers of other >> specific operations. >> >> A >> > >
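Benoit's "check the result itself" advice is easy to act on. Here is a small sketch in NumPy (not Eigen, and the matrices are random stand-ins) of both checks he describes: the inverse residual against the identity, and the solve residual against the right-hand side.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

# Check an inverse by how close A * inv(A) is to the identity matrix.
inv = np.linalg.inv(A)
inverse_error = np.linalg.norm(A @ inv - np.eye(4))

# Check a solve by computing lhs*solution and comparing with rhs.
x = np.linalg.solve(A, b)
residual = np.linalg.norm(A @ x - b)

# For a well-conditioned random matrix both residuals are tiny.
assert inverse_error < 1e-8 and residual < 1e-8
```

For a nearly singular matrix these residuals blow up, which is exactly the signal a condition number is usually asked to provide.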
# Motion of a charged particle inside a magnetic field

Posted by MissMulan, edited by Mithrandir24601 (fixed LaTeX formatting) and by MissMulan.

We have placed a charged particle of 2 C with mass 2 kg, 1 mm above a current-carrying wire of 1 A. The charged particle has an initial velocity of 100 m/s. For simplicity, the magnetic field of the wire exists only while the particle is directly over the wire.

How can we find the equation of motion of the particle? The force acting on it will be changing, because the particle will move away from the wire and the angle between the particle's velocity and the magnetic field will change as well.

![Diagram of particle travelling parallel to a wire with a current flowing](https://physics.codidact.com/uploads/YLsAncb6SkK7fYwoYXkJU4iW)

I thought to find the time the particle spends inside the magnetic field,

$$d = u\times t \implies t = \frac{d}{u} = 5\,\text{s}$$

but in order to integrate with respect to $t$ I have to find how the distance from the wire changes as time passes, and the relationship between time and the angle between the velocity and the magnetic field, which are pretty hard, and I am stuck. How do I continue?
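One way past the analytic roadblock is to integrate the Lorentz force numerically. Below is a sketch; the infinite-wire field model, the restriction to the x-y plane, and the step size are my assumptions, not part of the question.

```python
import numpy as np

q, m = 2.0, 2.0             # charge (C) and mass (kg) from the question
I = 1.0                     # wire current (A), taken to run along the +x axis
mu0 = 4e-7 * np.pi

pos = np.array([0.0, 1e-3])     # (x, y): start 1 mm above the wire
vel = np.array([100.0, 0.0])    # initially parallel to the wire

dt, steps = 1e-5, 10_000
for _ in range(steps):
    # Field of an infinite straight wire at height y > 0: B = mu0*I/(2*pi*y),
    # pointing in +z for current in +x, so F = q v x B stays in the x-y plane.
    Bz = mu0 * I / (2 * np.pi * pos[1])
    acc = (q / m) * np.array([vel[1] * Bz, -vel[0] * Bz])
    vel = vel + dt * acc        # semi-implicit Euler: update v, then x
    pos = pos + dt * vel

# The magnetic force does no work, so the speed is (nearly) conserved,
# while the particle drifts toward the wire (parallel currents attract).
```

Running this shows the essential behaviour the question is after: the speed stays at 100 m/s while the height above the wire shrinks, with the force growing as the particle approaches the wire.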
# Mathematical Logic for Computer Science, Lu Zhongwan (PDF)

In preparation forever; however, since 2000 it has been used successfully in a real logic course for computer science students. Erratum, page vi, line -9: replace "mathamatical" with "mathematical".

The Theorema system is a computer implementation of the ideas behind the Theorema project. The first Symposium on Theory of Computing (STOC) was held in Marina del Rey, California. Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. By exploiting $$\delta$$-perturbations, one can parameterize the algorithm to find interpolants with different positions between A and B.

From a linear-algebra aside: consider $$A \in \mathbb{R}^{m\times n}$$ as a mapping $$A : \mathbb{R}^n \to \mathbb{R}^m$$, $$Ax = y$$, $$x \in \mathbb{R}^n$$, $$y \in \mathbb{R}^m$$; its range and null space determine its rank and nullity.

LCF, the mechanization of Scott's Logic of Computable Functions, was a theoretically based yet practical tool for machine-assisted proof construction, and Milner's achievement was an effort to formalize a Calculus of Communicating Systems. Logic-based developments triggered, or made possible, the solution of long-standing open problems in five different areas of mathematics, some of whose earlier proofs were always suspected of containing errors because of their length and complexity. A formal proof of the Four-Color Theorem was produced with an automated interactive prover, and the set of three equations used in solving the Robbins-algebra problem was derived by an automated prover.

Related titles mentioned: Theorema 2.0: Computer-Assisted Natural-Style Mathematics; Analytica, a Theorem Prover in Mathematica; The Formulae-as-Types Notion of Construction; An Axiomatic Basis for Computer Programming; Concurrency and Automata on Infinite Sequences; Using Crash Hoare Logic for Certifying the FSCQ File System; Interpolants in Nonlinear Theories Over the Reals; Type Theory and Formal Proof: An Introduction; On the Asymptotic Behaviour of Primitive Recursive Algorithms; Personal Reflections on the Role of Mathematical Logic in Computer Science; An Extensible Ad Hoc Interface between Lean and Mathematica.

Simulation relations have been discovered in many areas: computer science, philosophical and modal logic, and set theory. Slight variations in timing, perhaps caused by congestion on a network, mean that two executions of the same program might give different results.
Lu Zhongwan, Mathematical Logic for Computer Science, 2nd edition (World Scientific Series in Computer Science), World Scientific: Singapore and Teaneck, N.J. Physical description: ix, 248 p. : ill. ; 23 cm. Print ISBN 9789971502515, 9971502518; eTextbook ISBN 9789812814883, 9812814884. The book describes the aspects of mathematical logic related to computer science: it provides detailed explanations of all proofs and the insights behind them, as well as detailed and nontrivial examples and problems. The only prerequisite is a basic familiarity with undergraduate mathematics, so it can be used by mathematically trained scientists of all disciplines.

A few further claims recoverable from the surrounding text: the method of semantic tableaux provides an elegant way to teach logic that is both theoretically sound and easy to understand; the arithmetic logic unit of a CPU is divided into the arithmetic unit (AU) and the logic unit (LU); the Theorema project, initiated in the mid-1990s by Bruno Buchberger, aims at the development of a computer assistant for the natural style of mathematics; proof traces from $$\delta$$-complete decision procedures can be transformed into interpolants; in the FSCQ file-system work, crashes at inopportune times can otherwise lead to data loss; and Alonzo Church and John von Neumann were among Princeton's residents.
# Radioactive Dating: How do we know the initial amount of radioactive atoms present in the object? I'm currently reading a book about Earth's geological history, and the author mentions radioactive dating as one of the methods used to estimate the age of given fossils. It does make sense to me and I fully accept it as a valid scientific method, but: • How do scientists measure the initial amount of the radioactive material present in a given object, for instance C-14? Do they compare it to other objects of identical chemical structure in which the decay process is yet to start, and if so, how do they know the process itself hasn't already started? • What method has been used to calculate the half-life of each isotope? It obviously did not happen through direct observation, since in many cases the radioactive decay takes thousands, if not millions, of years. • Wikipedia has a good article on Carbon-14 dating. As for dating other isotopes, the concentration at the "starting time" depends on the element and the location of the element in the earth. For instance there was a natural nuclear fission reactor. In fact, different isotopic compositions at different earth locations limit the precision with which the atomic weights of the elements can be calculated. – MaxW Mar 24 '20 at 21:32 • – BowlOfRed Mar 24 '20 at 21:55 • "It is based on the fact that radiocarbon (14 C) is constantly being created in the atmosphere by the interaction of cosmic rays with atmospheric nitrogen. The resulting 14 C combines with atmospheric oxygen to form radioactive carbon dioxide, which is incorporated into plants by photosynthesis; animals then acquire 14 C by eating the plants." I'm not sure I follow, since plants absorb carbon dioxide, not C-14. Is it created through the photosynthesis process? 
– pq89 Mar 25 '20 at 6:34 The C-14 dating method was calibrated by comparing its results with the results from another, independent dating method: the counting of tree rings (dendrochronology). Quoted from Radiocarbon dating - Calibration: To produce a curve that can be used to relate calendar years to radiocarbon years, a sequence of securely dated samples is needed which can be tested to determine their radiocarbon age. The study of tree rings led to the first such sequence: individual pieces of wood show characteristic sequences of rings that vary in thickness because of environmental factors such as the amount of rainfall in a given year. These factors affect all trees in an area, so examining tree-ring sequences from old wood allows the identification of overlapping sequences. In this way, an uninterrupted sequence of tree rings can be extended far into the past. The first such published sequence, based on bristlecone pine tree rings, was created by Wesley Ferguson. Hans Suess used this data to publish the first calibration curve for radiocarbon dating in 1967. The K-Ar dating method is based on the half-life of $$^{40}K$$, which is $$1.248\cdot 10^9$$ years. This half-life could be determined in the laboratory by measuring two things: • The isotope mixing ratios of natural potassium can be determined with a mass spectrometer. It contains $$0.0117 \text{%}$$ of $$^{40}K$$. The other isotopes are not radioactive. • The radioactive decay rate of a certain amount of natural potassium can be determined with a radioactivity counter (e.g. a Geiger-Müller counter). The decay rate is around 31 per second per gram of potassium. From these two measured numbers and Avogadro's constant, the half-life of $$^{40}K$$ can be calculated in a straightforward way.
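The "straightforward way" can be spelled out in a few lines. A sketch, using the quoted 0.0117% abundance and a specific activity of about 31 decays per second per gram of natural potassium (the tabulated value, consistent with the stated half-life):

```python
import math

N_A = 6.02214076e23      # Avogadro's constant (1/mol)
M_K = 39.0983            # molar mass of natural potassium (g/mol)
abundance = 0.0117e-2    # fraction of K atoms that are K-40
activity = 31.0          # measured decays per second per gram of natural K

n_k40 = (1.0 / M_K) * N_A * abundance   # number of K-40 atoms in 1 g of K
decay_const = activity / n_k40          # lambda = A / N   (1/s)
half_life_s = math.log(2) / decay_const
half_life_yr = half_life_s / 3.156e7    # seconds per year

# Comes out at about 1.3 billion years, close to the quoted 1.248e9.
```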
# Pdf2id 3.5 Full

Q: Intuitive explanation of why the principal fiber bundles are the coverings of a manifold $M$. I would like to get an intuitive (layman) explanation of why principal fiber bundles form a covering space of a manifold $M$. Note: The concept of a covering space used here is the one defined by Gleason and Corwin, $\mathrm{Cov}(X,G)$. I couldn't find it online, but it is page 152 in John W. Gleason and Richard J. Corwin (1982), "Riemannian structures on manifolds with bounded curvatures," Journal of Differential Geometry 9(1). This reference is available online here.

A: Imagine that the manifold $M$ is all "kinked strings", that is, string patterns with infinitely many rigidly kinked cusps. Then you can think of principal $G$-bundles as the covering of $M$ where, if the kinks of the strings are far enough from each other, they glue together to form a circle (it will be the fibre $G$).

Complementary therapies to relieve the burden of migraine. Migraine is an exceedingly disabling chronic primary headache syndrome that affects approximately 28 million US adults. Migraine affects more than 25% of people in the United States and is associated with significant disability and poor quality of life. Over three quarters of migraine sufferers seek treatment, and almost all seek medications. However, there is a large unmet need for complementary and alternative medical therapies in migraine that are cost-effective, safe, and effective. This review summarizes the current evidence for the use of complementary therapies in migraine.

Q: OpenERP: Making a view view-only with login_required. I am building a view that requires user authentication. It's for user-specific data, so I want to make sure that the user is logged in before showing it.

PDF2ID, from Recosoft Corporation, is a PDF-to-InDesign conversion tool for Mac OS X and Windows; PDF2ID 3.5 (for InDesign CS4-CS6, 295.00 €) converts PDF documents quickly and easily into fully editable InDesign files. PDF2ID 3.5 can export to CS5.3 and CS5.4; if you are running a CS5.x version of InDesign, you can simply upgrade to InDesign CS5.4. Note that you cannot re-create PDF tagged pages and re-tag them with InDesign tags.

Q: How can I use CSS to position an image with a different hover state? I'm working on a mobile navigation bar, and I'd like the image in a navigation item to be the background image for that li when the li itself is hovered, and to switch to a different image representing that li when it is not hovered. Home About Us Menu Services Contact #mobileNav { position:absolute; bottom
## Problem 1 – nah by Ok, I know problem 1 has been solved several times. But I’ve continued messing with it, and various versions of my programs, and comparing run-times, and so I thought I’d share. To run my python scripts from the command line, and do performance testing, it’s helpful to have the scripts take arguments. The natural arguments for problem 1 are the upper bound and the list of divisors. So I’d call my program as “python script.py 1000 3 5”. This lets me quickly change the upper bound, or the list of divisors, outside of the code. So, how do I get at those variables in python? I found out from this page at diveintopython.org (which I should probably poke around more). The first solution I’d like to test uses the idea of my first solution: make a set representing all the multiples, and then simply sum the set. My code is:

```
import sys, time

t = time.time()

# the first argument is the upper bound (excluded from the sum)
# python treats all the arguments as strings, so we have to convert to ints
bound = int(sys.argv[1]) - 1

# the remaining arguments are the divisors, convert them all to ints using map
# i like the "slice notation" python uses for sub-lists
divs = map(int, sys.argv[2:])

multiples = set()
for n in divs:
    s = set(n*i for i in xrange(1, bound//n + 1))
    multiples |= s  # faster than multiples = multiples | s

t = time.time() - t
print sum(multiples), "in", t, "sec"
```

I should mention that Chris’ recent code pointed out to me that you can just subtract one from the upper bound, and then not do any extra checking about whether to include it or not. Next up, we use the idea Chris has explained about directly computing the sum, instead of messing about making lists of multiples. We should expect this to be an essentially constant-time algorithm (in the size of the upper bound, anyway, not the number of divisors) since there is no looping involving the bound. 
Since I want to allow any number of divisors, I’ll have to iterate over subsets of the divisors, which are conveniently in 1-1 correspondence with binary strings of length number-of-divisors. Anyway, here’s what I came up with:

```
import time, sys

t = time.time()

# reduce(gcd, list) to get gcd of more than 2 numbers
def gcd(a, b):
    if b == 0:
        return a
    return gcd(b, a % b)

# reduce(lcm, list) to get lcm of more than 2 numbers
def lcm(a, b):
    return a * b / gcd(a, b)

# 1 + 2 + ... + n
def sumton(n):
    return n * (n + 1) // 2

# sum all multiples of n up to and including b
def summultiples(b, n):
    return n * sumton(b // n)

# convert int to binary string, from
# http://code.activestate.com/recipes/219300/
# sweet lambda function
bstr_ = lambda n: n > 0 and bstr_(n >> 1) + str(n & 1) or ''

# convert n to binary string, with width at least w
# padded on the left with 0s
def tobin(n, w):
    base = bstr_(n)
    spaces = w - len(base)
    if spaces < 0:
        spaces = 0
    return spaces * "0" + base  # nice syntax... spaces copies of "0"

# first argument is the upper bound (excluded)
bound = int(sys.argv[1]) - 1
# the remaining arguments are the list of divisors
divs = map(int, sys.argv[2:])

sum = 0
# for every non-empty subset of divisors
for n in xrange(1, 2 ** len(divs)):
    bstr = tobin(n, len(divs))
    height = bstr.count('1')
    sign = (-1) ** (height - 1)  # appropriate minus signs
    thesedivs = []  # represents this subset of divisors
    for p in xrange(1, len(divs) + 1):
        # let's play with negative string indexing
        # making the 2^p digit correspond to the p-th divisor in divs
        if bstr[-p] == '1':
            thesedivs.append(divs[p - 1])
    sum += sign * summultiples(bound, reduce(lcm, thesedivs))

t = time.time() - t
print sum, "in", t, "sec"
```

This entire solution, by the way, can be done in very few lines with sage, since it has more built-in commands. More on that another day though, I guess. Anyway, I then did some performance testing, for grins. 
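The same inclusion-exclusion sum can be written much more compactly today with itertools and functools. A Python 3 sketch of the idea (the function name is mine, not from the post):

```python
from math import gcd
from functools import reduce
from itertools import combinations

def lcm(a, b):
    return a * b // gcd(a, b)

def sum_multiples_below(bound, divisors):
    """Sum of all numbers below `bound` divisible by at least one divisor."""
    bound -= 1  # largest value actually included
    total = 0
    # inclusion-exclusion over non-empty subsets of the divisors
    for k in range(1, len(divisors) + 1):
        sign = (-1) ** (k - 1)
        for subset in combinations(divisors, k):
            m = reduce(lcm, subset)
            n = bound // m  # how many multiples of m are <= bound
            total += sign * (m * n * (n + 1) // 2)
    return total
```

For the classic Project Euler instance, `sum_multiples_below(1000, [3, 5])` reproduces the well-known answer 233168.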
For bounds between 1000 and 10000, in multiples of 500 (so 1000, 1500, 2000, …, 10000), I ran each of the above scripts 50 times, and calculated the average runtime. I'd post my shell script here, but it's nasty. If I continue trying performance testing, and clean it up a bit, perhaps I'll write a post. In the meantime…

So I took all the numbers out, and played with the Google Charts API for a bit, and came up with a decent graph:

Performance Test Comparison

I'm tired of messing with it, so I'll just tell you here that the scale on the $x$-axis is thousands, and the scale on the $y$-axis is milliseconds. Now that I've done this once, it might go quicker in the future, but don't count on seeing too many more of these from me.

One final word – a wordpress tip I just stumbled upon accidentally. After doing the "sourcecode" tag, and copying your program in, if you click the "HTML" button toward the top of the editor box, and then click back on "Visual", things work out more nicely, in terms of editing the sourcecode. At least, they did for me.

Update 20090518: Corrected the lack of links to Chris' post.

### 2 Responses to "Problem 1 – nah"

1. Performance Testing « Leonhard Euler's Flying Circus Says:

   […] Each of the scripts (that I want to test anyway) is set up to take command line arguments (see my other post for how to do that). The script below assumes that the argument that is changing comes first, and […]

2. sumidiot Says:

   In the second solution above, instead of getting height = bstr.count('1'), and then calculating the sign (lines 46-47), it might make more sense to determine sign as (-1)**(len(thesedivs)-1) just after the if statement in the for loop (so, between lines 53 and 54). This isn't a big difference, but it seems like since you're already looping through the string looking for 1s, you're basically redoing the 'bstr.count' calculation.
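For what it's worth, the inclusion–exclusion idea can be written quite compactly in modern Python using itertools.combinations to enumerate the subsets instead of binary strings (my paraphrase, not code from the post):

```python
from functools import reduce
from itertools import combinations
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def sum_of_multiples(bound, divisors):
    """Sum of all multiples of any of the divisors, strictly below bound."""
    b = bound - 1
    total = 0
    # inclusion-exclusion over every non-empty subset of the divisors
    for r in range(1, len(divisors) + 1):
        for subset in combinations(divisors, r):
            m = reduce(lcm, subset)   # only multiples of the subset's lcm count
            k = b // m
            total += (-1) ** (r - 1) * (m * k * (k + 1) // 2)
    return total
```

Here the sign falls straight out of the subset size, which is the commenter's point: `sum_of_multiples(10, [3, 5])` gives 23, and `sum_of_multiples(1000, [3, 5])` gives the familiar 233168.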
{}
# Properties

- Label: 2500.1.d.a
- Level: $2500$
- Weight: $1$
- Character orbit: 2500.d
- Analytic conductor: $1.248$
- Analytic rank: $0$
- Dimension: $4$
- Projective image: $D_{5}$
- CM discriminant: $-4$
- Inner twists: $4$

# Related objects

## Newspace parameters

- Level: $$N = 2500 = 2^{2} \cdot 5^{4}$$
- Weight: $$k = 1$$
- Character orbit: $$[\chi] =$$ 2500.d (of order $$2$$, degree $$1$$, not minimal)

## Newform invariants

- Self dual: no
- Analytic conductor: $$1.24766253158$$
- Analytic rank: $$0$$
- Dimension: $$4$$
- Coefficient field: $$\Q(i, \sqrt{5})$$
- Defining polynomial: $$x^{4} + 3 x^{2} + 1$$
- Coefficient ring: $$\Z[a_1, \ldots, a_{13}]$$
- Coefficient ring index: $$1$$
- Twist minimal: no (minimal twist has level 100)
- Projective image: $$D_{5}$$
- Projective field: Galois closure of 5.1.6250000.1

## $q$-expansion

Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\beta_2,\beta_3$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + \beta_{3} q^{2} - q^{4} -\beta_{3} q^{8} - q^{9} +O(q^{10})$$

$$q + \beta_{3} q^{2} - q^{4} -\beta_{3} q^{8} - q^{9} -\beta_{1} q^{13} + q^{16} + ( -\beta_{1} - \beta_{3} ) q^{17} -\beta_{3} q^{18} + \beta_{2} q^{26} + ( 1 + \beta_{2} ) q^{29} + \beta_{3} q^{32} + ( 1 + \beta_{2} ) q^{34} + q^{36} + ( -\beta_{1} - \beta_{3} ) q^{37} + \beta_{2} q^{41} - q^{49} + \beta_{1} q^{52} -\beta_{1} q^{53} + ( \beta_{1} + \beta_{3} ) q^{58} + \beta_{2} q^{61} - q^{64} + ( \beta_{1} + \beta_{3} ) q^{68} + \beta_{3} q^{72} -\beta_{1} q^{73} + ( 1 + \beta_{2} ) q^{74} + q^{81} + \beta_{1} q^{82} + ( 1 + \beta_{2} ) q^{89} + ( -\beta_{1} - \beta_{3} ) q^{97} -\beta_{3} q^{98} +O(q^{100})$$

$$\operatorname{Tr}(f)(q)$$ $$=$$ $$4q - 4q^{4} - 4q^{9} + O(q^{10})$$

$$4q - 4q^{4} - 4q^{9} + 4q^{16} - 2q^{26} + 2q^{29} + 2q^{34} + 4q^{36} - 2q^{41} - 4q^{49} - 2q^{61} - 4q^{64} + 2q^{74} + 4q^{81} + 2q^{89} + O(q^{100})$$

Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{4} + 3 x^{2} + 1$$:

$$\beta_{0} = 1, \qquad \beta_{1} = \nu, \qquad \beta_{2} = \nu^{2} + 1, \qquad \beta_{3} = \nu^{3} + 2 \nu$$

and conversely

$$1 = \beta_0, \qquad \nu = \beta_{1}, \qquad \nu^{2} = \beta_{2} - 1, \qquad \nu^{3} = \beta_{3} - 2 \beta_{1}$$

## Character values

We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/2500\mathbb{Z}\right)^\times$$.

| $$n$$ | $$1251$$ | $$1877$$ |
|---|---|---|
| $$\chi(n)$$ | $$-1$$ | $$-1$$ |

## Embeddings

For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label.
| Label | $$\iota_m(\nu)$$ | $$a_{2}$$ | $$a_{3}$$ | $$a_{4}$$ | $$a_{5}$$ | $$a_{6}$$ | $$a_{7}$$ | $$a_{8}$$ | $$a_{9}$$ | $$a_{10}$$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 2499.1 | 1.61803i | −1.00000i | 0 | −1.00000 | 0 | 0 | 0 | 1.00000i | −1.00000 | 0 |
| 2499.2 | −0.618034i | −1.00000i | 0 | −1.00000 | 0 | 0 | 0 | 1.00000i | −1.00000 | 0 |
| 2499.3 | 0.618034i | 1.00000i | 0 | −1.00000 | 0 | 0 | 0 | −1.00000i | −1.00000 | 0 |
| 2499.4 | −1.61803i | 1.00000i | 0 | −1.00000 | 0 | 0 | 0 | −1.00000i | −1.00000 | 0 |

(The $$a_2$$ and $$a_8$$ columns are recovered from $$a_2 = \beta_3 = \nu^3 + 2\nu$$ and $$a_8 = -\beta_3$$ in the $$q$$-expansion above.)

## Inner twists

| Char | Parity | Ord | Mult | Type |
|---|---|---|---|---|
| 1.a | even | 1 | 1 | trivial |
| 4.b | odd | 2 | 1 | CM by $$\Q(\sqrt{-1})$$ |
| 5.b | even | 2 | 1 | inner |
| 20.d | odd | 2 | 1 | inner |

## Twists

By twisting character orbit:

| Char | Parity | Ord | Mult | Type | Twist | Dim |
|---|---|---|---|---|---|---|
| 1.a | even | 1 | 1 | trivial | 2500.1.d.a | 4 |
| 4.b | odd | 2 | 1 | CM | 2500.1.d.a | 4 |
| 5.b | even | 2 | 1 | inner | 2500.1.d.a | 4 |
| 5.c | odd | 4 | 1 | | 2500.1.b.a | 2 |
| 5.c | odd | 4 | 1 | | 2500.1.b.b | 2 |
| 20.d | odd | 2 | 1 | inner | 2500.1.d.a | 4 |
| 20.e | even | 4 | 1 | | 2500.1.b.a | 2 |
| 20.e | even | 4 | 1 | | 2500.1.b.b | 2 |
| 25.d | even | 5 | 2 | | 500.1.h.a | 8 |
| 25.d | even | 5 | 2 | | 2500.1.h.e | 8 |
| 25.e | even | 10 | 2 | | 500.1.h.a | 8 |
| 25.e | even | 10 | 2 | | 2500.1.h.e | 8 |
| 25.f | odd | 20 | 2 | | 100.1.j.a | 4 |
| 25.f | odd | 20 | 2 | | 500.1.j.a | 4 |
| 25.f | odd | 20 | 2 | | 2500.1.j.a | 4 |
| 25.f | odd | 20 | 2 | | 2500.1.j.b | 4 |
| 75.l | even | 20 | 2 | | 900.1.x.a | 4 |
| 100.h | odd | 10 | 2 | | 500.1.h.a | 8 |
| 100.h | odd | 10 | 2 | | 2500.1.h.e | 8 |
| 100.j | odd | 10 | 2 | | 500.1.h.a | 8 |
| 100.j | odd | 10 | 2 | | 2500.1.h.e | 8 |
| 100.l | even | 20 | 2 | | 100.1.j.a | 4 |
| 100.l | even | 20 | 2 | | 500.1.j.a | 4 |
| 100.l | even | 20 | 2 | | 2500.1.j.a | 4 |
| 100.l | even | 20 | 2 | | 2500.1.j.b | 4 |
| 200.v | even | 20 | 2 | | 1600.1.bh.a | 4 |
| 200.x | odd | 20 | 2 | | 1600.1.bh.a | 4 |
| 300.u | odd | 20 | 2 | | 900.1.x.a | 4 |

By twisted newform orbit:

| Twist | Dim | Char | Parity | Ord | Mult | Type |
|---|---|---|---|---|---|---|
| 100.1.j.a | 4 | 25.f | odd | 20 | 2 | |
| 100.1.j.a | 4 | 100.l | even | 20 | 2 | |
| 500.1.h.a | 8 | 25.d | even | 5 | 2 | |
| 500.1.h.a | 8 | 25.e | even | 10 | 2 | |
| 500.1.h.a | 8 | 100.h | odd | 10 | 2 | |
| 500.1.h.a | 8 | 100.j | odd | 10 | 2 | |
| 500.1.j.a | 4 | 25.f | odd | 20 | 2 | |
| 500.1.j.a | 4 | 100.l | even | 20 | 2 | |
| 900.1.x.a | 4 | 75.l | even | 20 | 2 | |
| 900.1.x.a | 4 | 300.u | odd | 20 | 2 | |
| 1600.1.bh.a | 4 | 200.v | even | 20 | 2 | |
| 1600.1.bh.a | 4 | 200.x | odd | 20 | 2 | |
| 2500.1.b.a | 2 | 5.c | odd | 4 | 1 | |
| 2500.1.b.a | 2 | 20.e | even | 4 | 1 | |
| 2500.1.b.b | 2 | 5.c | odd | 4 | 1 | |
| 2500.1.b.b | 2 | 20.e | even | 4 | 1 | |
| 2500.1.d.a | 4 | 1.a | even | 1 | 1 | trivial |
| 2500.1.d.a | 4 | 4.b | odd | 2 | 1 | CM |
| 2500.1.d.a | 4 | 5.b | even | 2 | 1 | inner |
| 2500.1.d.a | 4 | 20.d | odd | 2 | 1 | inner |
| 2500.1.h.e | 8 | 25.d | even | 5 | 2 | |
| 2500.1.h.e | 8 | 25.e | even | 10 | 2 | |
| 2500.1.h.e | 8 | 100.h | odd | 10 | 2 | |
| 2500.1.h.e | 8 | 100.j | odd | 10 | 2 | |
| 2500.1.j.a | 4 | 25.f | odd | 20 | 2 | |
| 2500.1.j.a | 4 | 100.l | even | 20 | 2 | |
| 2500.1.j.b | 4 | 25.f | odd | 20 | 2 | |
| 2500.1.j.b | 4 | 100.l | even | 20 | 2 | |

## Hecke kernels

This newform subspace is the entire newspace $$S_{1}^{\mathrm{new}}(2500, [\chi])$$.

## Hecke characteristic polynomials

| $p$ | $F_p(T)$ |
|---|---|
| $2$ | $( 1 + T^{2} )^{2}$ |
| $3$ | $T^{4}$ |
| $5$ | $T^{4}$ |
| $7$ | $T^{4}$ |
| $11$ | $T^{4}$ |
| $13$ | $1 + 3 T^{2} + T^{4}$ |
| $17$ | $1 + 3 T^{2} + T^{4}$ |
| $19$ | $T^{4}$ |
| $23$ | $T^{4}$ |
| $29$ | $( -1 - T + T^{2} )^{2}$ |
| $31$ | $T^{4}$ |
| $37$ | $1 + 3 T^{2} + T^{4}$ |
| $41$ | $( -1 + T + T^{2} )^{2}$ |
| $43$ | $T^{4}$ |
| $47$ | $T^{4}$ |
| $53$ | $1 + 3 T^{2} + T^{4}$ |
| $59$ | $T^{4}$ |
| $61$ | $( -1 + T + T^{2} )^{2}$ |
| $67$ | $T^{4}$ |
| $71$ | $T^{4}$ |
| $73$ | $1 + 3 T^{2} + T^{4}$ |
| $79$ | $T^{4}$ |
| $83$ | $T^{4}$ |
| $89$ | $( -1 - T + T^{2} )^{2}$ |
| $97$ | $1 + 3 T^{2} + T^{4}$ |
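As a quick sanity check (mine, not part of the database page), the embedding values $$\iota_m(\nu)$$ above should be exactly the four roots of the defining polynomial $$x^{4} + 3x^{2} + 1$$, namely $$\pm i\varphi$$ and $$\pm i/\varphi$$ with $$\varphi$$ the golden ratio:

```python
import cmath

# x^4 + 3x^2 + 1 = 0; substituting y = x^2 gives y^2 + 3y + 1 = 0,
# whose roots y = (-3 +/- sqrt(5))/2 are both negative reals,
# so all four roots x are purely imaginary.
ys = [(-3 + 5 ** 0.5) / 2, (-3 - 5 ** 0.5) / 2]
roots = [s * cmath.sqrt(y) for y in ys for s in (1, -1)]

phi = (1 + 5 ** 0.5) / 2          # golden ratio, approx 1.61803
mags = sorted(abs(r) for r in roots)
```

The magnitudes come out as 0.618034 and 1.61803, matching the $$\iota_m(\nu)$$ column, and they are indeed $$1/\varphi$$ and $$\varphi$$.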
{}
Function to arrange a journal club (JC) schedule.

```
jc_tombola(
  data,
  members,
  group,
  gr_lvl,
  status,
  st_lvl,
  frq,
  date,
  seed = NULL
)
```

Arguments

- data: Data frame with the members and their information.
- members: Column with the members' names.
- group: Column used to arrange the groups.
- gr_lvl: Levels in the groups for the arrangement. See details.
- status: Column with the status of the members.
- st_lvl: Level that confirms attendance in the JC. See details.
- frq: Number of days for each session.
- date: Date when the first session of the JC starts.
- seed: Number for replicating the results (default = date).

Value

Data frame with the schedule for the JC.

Details

The function can consider n levels for gr_lvl. In the case of two levels, the third level will be both. The suggested levels for st_lvl are: active or spectator. Only the active members will enter the schedule.
{}
### Archives for general relativity

So Leonard Susskind publishes a paper on the arXiv: "Dear Qubitzers, GR=QM". Which of course is what I have been saying all along. Of course Susskind's paper is actually not about QM emerging from GR, which is what I believe, and have good reason to follow up on.

Dear Qubitzers, GR=QM? Well why not? Some of us already accept ER=EPR [1], so why not follow it to its logical conclusion? It is said that general relativity and quantum mechanics are separate subjects that don't fit together comfortably. There is a tension, even a contradiction between them—or so one often hears. I take exception to this view. I think that exactly the opposite is true. It may be too strong to say that gravity and quantum mechanics are exactly the same thing, but those of us who are paying attention, may already sense that the two are inseparable, and that neither makes sense without the other.

The 'paper' (perhaps letter is a better name) has made the rounds. From Not Even Wrong:

Instead of that happening, it seems that the field is moving ever forward in a post-modern direction I can't follow. Tonight the arXiv has something new from Susskind about this, where he argues that one should go beyond "ER=EPR", to "GR=QM". While the 2013 paper had very few equations, this one has none at all, and is actually written in the form not of a scientific paper, but of a letter to fellow "Qubitzers". On some sort of spectrum of precision of statements, with Bourbaki near one end, this paper is way at the other end.

Meanwhile Woit's nemesis Lubos Motl writes that Susskind also says lots of his usual wrong statements resulting from a deep misunderstanding of quantum mechanics – e.g. that "quantum mechanics is the same as a classical simulation of it". A classical system, a simulation or otherwise, can never be equivalent to a quantum mechanical theory.
The former really doesn't obey the uncertainty principle, allows objective facts; the latter requires an observer and is a framework to calculate probabilities of statements that are only meaningful relative to a chosen observer's observations.

Sabine Hossenfelder put it visually on Twitter:

My take is about the same as these popular bloggers. Don't really think much of it. Except the title. QM can, I believe, emerge from Einstein's General Relativity, in much the same way that Bush and Couder's bouncing drops can display quantum behaviour. It's ridiculous that 11 dimensions and sparticles have hundreds of times more study than fundamental emergent phenomena. Emergence is the way to go forward. You don't need a new force/particle/dimension/brane to make fundamentally new physics from what we already have in electromagnetism and general relativity. See the search links on the side of this blog for some recent papers in these areas.

I have been reading up on the trans-Planckian problem with the black hole evaporation process.

##### Here is the problem.

An observer far away from a black hole sees photons of normal infrared or radio wave energies coming from a black hole (i.e. << 1 eV). If one calculates the energies that these photons should have once they are in the vicinity of the black hole horizon, the energy becomes high – higher than the Planck energy, exponentially so. Of course if we ride with the photon down to the horizon, the photon blue shifts like mad, going 'trans-Planckian' – i.e. having more energy than the Planck energy.

Looked at another way: if a photon starts out at the horizon, then we won't ever see it as a distant observer. So it needs to start out just above the horizon, where the distance from the horizon is given by the Heisenberg uncertainty principle, and propagate to us. The problem is that the energy of these evaporating photons must be enormous at this quantum distance from the horizon – not merely enormous, but exponentially enormous.
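To put rough numbers on the "exponentially enormous" claim (my own back-of-envelope sketch, not a calculation from Helfer's paper): for Hawking modes the blueshift factor grows like e^(κu), with κ = c³/4GM the surface gravity of the hole, so tracing a 1 eV photon back only a millisecond or so of retarded time already pushes it past the Planck energy:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_sun = 1.989e30    # solar mass, kg

# surface gravity of a solar-mass black hole, as an inverse time
kappa = c**3 / (4 * G * M_sun)   # roughly 5e4 per second

E_obs = 1.0          # photon energy seen far away, in eV
E_planck = 1.22e28   # Planck energy, in eV

# retarded time over which the factor e^(kappa * u) takes
# a 1 eV photon past the Planck energy
u = math.log(E_planck / E_obs) / kappa
```

u comes out around a millisecond: the exponential does all the work, which is exactly why an ultraviolet cutoff anywhere short of infinity kills the late-time quanta.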
A proper analysis actually starts the photon off in the formation of the black hole, but the physics is the same. Adam Helfer puts it well in his paper – great clear writing and thinking:

#### Trans-Planckian modes, back-reaction, and the Hawking process

My take is simple. After reading Helfer's paper plus others on the subject, I'm fairly convinced that black holes of astrophysical size (or even down to trillions of tons) do not evaporate.

### The math is good. The physics isn't

Let's get things straight here: the math behind Hawking evaporation is good; Hawking's math for black hole evaporation is not in question.

It should be emphasized that the problems uncovered here are entirely physical, not mathematical. While there are some technical mathematical concerns with details of Hawking's computation, we do not anticipate any real difficulty in resolving these (cf. Fredenhagen and Haag 1990). The issues are whether the physical assumptions underlying the mathematics are correct, and whether the correct physical lessons are being drawn from the calculations.

Yet Hawking's prediction of black hole evaporation is one of the great predictions of late 20th century physics.

Whether black holes turn out to radiate or not, it would be hard to overstate the significance of these papers. Hawking had found one of those key physical systems which at once bring vexing foundational issues to a point, are accessible to analytic techniques, and suggest deep connections between disparate areas of physics. (Helfer, A. D. (2003). Do black holes radiate? Retrieved from https://arxiv.org/pdf/gr-qc/0304042.pdf)

So it's an important concept. In fact it's so important that not only black hole physics but much of quantum gravity and cosmology uses or even depends on black hole evaporation. Papers with titles like "Avoiding the Trans-Planckian Problem in Black Hole Physics" abound.

### The trans-Planckian problem is indicative of the state of modern physics.
There are so many theories in physics today that rely on an unreasonable extrapolation of the efficacy of quantum mechanics at energies and scales that are not merely larger than experimental data, but exponentially larger than we have experimental evidence for. It's like that old joke about putting a dollar into a bank account and waiting a million years – even at a few per cent interest your money will be worth more than the planet. A straightforward look at history shows that currencies and banks live for hundreds of years – not millions. The same thing happens in physics – you can't connect two reasonable physical states through an unphysical one and expect it to work. The trans-Planckian problem is replicated perfectly in inflationary big bang theory.

The trans-Planckian problem seems like a circle-the-wagons type of situation in physics. Black hole evaporation now has too many careers built on it to be easily torn down. Torn down:

To emphasize the essential way these high-frequency modes enter, suppose we had initially imposed an ultraviolet cut-off Λ on the in-modes. Then we should have found no Hawking quanta at late times, for the out-modes' maximum frequency would be ∼ v′(u)Λ, which goes to zero rapidly. (It is worth pointing out that this procedure is within what may be fairly described as text-book quantum field theory: start with a cut-off, do the calculation, and at the very end take the cut-off to infinity. That this results in no Hawking quanta emphasizes the delicacy of the issues. In this sense, the trans-Planckian problem may be thought of as a renormalization-ambiguity problem.)

Some may argue that other researchers have solved the trans-Planckian problem, but it's just too simple a problem to get around. One way around it – which I assume is what many researchers think – is that quantum mechanics is somehow different from every other physical theory ever found, in that it has no UV or IR limits – no limits at all. In my view that is extremely unlikely.
Quantum mechanics has limits, like every other theory.

##### Possible limits of quantum mechanics:

• Zero point: Perhaps there is a UV cutoff (Λ). The quantum vacuum cannot create particles of arbitrarily large energies.
• Instant collapse: While it's an experimental fact that QM has non-local connections, the actual speed of these connections is only tested to a few times the speed of light.
• Quantum measurement: Schrödinger's cat is as Schrödinger initially intended it to be seen – as an illustration of the absurdity of QM in macroscopic systems.

If there is a limit on quantum mechanics – if QM is like any other theory, a tool that works very well in some domain of physical problems – then many pillars of theoretical physics will have to tumble, black hole evaporation being one of them.

##### Qingdi Wang, Zhen Zhu, and William G. Unruh: How the huge energy of quantum vacuum gravitates to drive the slow accelerating expansion of the Universe

This paper (I will call it WZU) has been discussed at several places: Phys.org, Sabine Hossenfelder at the Backreaction blog, and others. So why talk about it more here? Well, because it's an interesting paper, and I think that many of the most interesting bits have been ignored or misunderstood (I'm talking here about actual physicists, not the popular press articles). For instance, here are two paragraphs from Sabine Hossenfelder:

Another troublesome feature of their idea is that the scale-factor of the oscillating space-time crosses zero in each cycle so that the space-time volume also goes to zero and the metric structure breaks down. I have no idea what that even means. I'd be willing to ignore this issue if the rest was working fine, but seeing that it doesn't, it just adds to my misgivings.

So with the first paragraph, Sabine is talking about the a(t, x) factor in the metric (see equation 23 in the paper).
I think that she could be a little more up front here: a(t, x) goes to zero alright, but only in very small regions of space for very short times (I'll come back to that later). So in reality the average of a(t, x) over any distance/time of Planck scale or larger determines an almost flat, almost Lambda-free universe: average(a(t, x)) → the a(t) of a FLRW metric. I guess Sabine is worried about those instants when there are singularities in the solution. I agree with the answer to this supplied in the paper:

It is natural for a harmonic oscillator to pass its equilibrium point a(t, x) = 0 at maximum speed without stopping. So in our solution, the singularity immediately disappears after it forms and the spacetime continues to evolve without stopping. Singularities just serve as the turning points at which the space switches. …(technical argument which is not all that complicated)… In this sense, we argue that our spacetime with singularities due to the metric becoming degenerate (a = 0) is a legitimate solution of GR.

As I said, more on that below when we get to my take on this paper. The second paragraph above from the Backreaction blog concerns the fact that the paper's authors used semiclassical gravity to derive this result:

The other major problem with their approach is that the limit they work in doesn't make sense to begin with. They are using classical gravity coupled to the expectation values of the quantum field theory, a mixture known as 'semi-classical gravity' in which gravity is not quantized. This approximation, however, is known to break down when the fluctuations in the energy-momentum tensor get large compared to its absolute value, which is the very case they study.

They are NOT using classical gravity coupled to the expectation values of the quantum field theory. Indeed, according to WZU and the mathematics of the paper:

In this paper, we are not trying to quantize gravity.
Instead, we are still keeping the spacetime metric a(t, x) as classical, but quantizing the fields propagating on it. The key difference from the usual semiclassical gravity is that we go one more step—instead of assuming the semiclassical Einstein equation, where the curvature of the spacetime is sourced by the expectation value of the quantum field stress energy tensor, we also take the huge fluctuations of the stress energy tensor into account. In our method, the sources of gravity are stochastic classical fields whose stochastic properties are determined by their quantum fluctuations.

So I think that she has it wrong. In her reply to my comment on her blog she states that it's still semiclassical gravity, as they use the expectation values of the fluctuations (they don't, as you can see from the quote above, or better by looking at the paper – equation 29 talks about expectation values, but the actual solution does not use them). She concludes her comment: "Either way you put it, gravity isn't quantized." I think that's also a fair appraisal of the attitude of many people on reading this paper: they don't like it because gravity is treated classically.

## Why I think the paper is interesting.

#### Gravity is not quantized: get over it

I think it's interesting as their approach to connecting gravity to the quantum world is basically identical to my Fully Classical Quantum Gravity experimental proposal – namely that gravity is not quantized at all and that gravity couples directly to the sub-quantum fluctuations. Wang and co-authors apologize for the lack of a quantum theory of gravity, but that appears to me anyway as more of a consensus-toeing statement than physics. Indeed, the way it's shoved in at the start of section C seems like an afterthought.

#### (Gravitational) Singularities are no big deal

Singularities are predicted by many (or even all?) field theories in physics.
In QED the technique of renormalization works to remove singularities (which are the same as infinities). In the rest of modern QFT, singularities are only perhaps removed by renormalization. In other words, quantum field theory blows up all by itself, without any help from other theories. It's naturally bad. The Einstein equations have a different behaviour under singular conditions: they are completely well behaved. It's only when other fields are brought in, such as electromagnetism or quantum field theory, that trouble starts. But all on their own, singularities are no big deal in gravity. So I don't worry about the microscopic, extremely short lived singularities in WZU at all.

#### Why it's exciting

We have WZU metric equation 23:

$ds^2 = -dt^2 + a^2(t,x)\,(dx^2 + dy^2 + dz^2)$

a(t, x) oscillates THROUGH zero to negative values, but the metric depends on $a^2$, so we have a positive semi-definite metric that has some zeros. These zeros are spread out quasi-periodically in space and time.

If one takes two points on the manifold (Alice and Bob, denoted A & B), then the distance between A and B will be equivalent to the flat space measure (I am not looking at A and B being cosmic scale distances apart in time or space, so it's almost Minkowski). Thus imagine A and B being a thousand km apart. The scale factor a(t, x) averages to 1.

Here is the exciting bit. While an arbitrary line (or the average of an ensemble of routes) from A → B is measured as a thousand km, there are shorter routes through the metric. Much shorter routes. How short? Perhaps arbitrarily short. It may be that there is a vanishingly small set of paths with length ds = 0, and some number of paths with ds just greater than 0, all the way up to 'slow paths' that spend more time in $a > 1$ areas.

Imagine a thread-like singularity (like a cosmic string – or better, a singularity not unlike a Kerr singularity where $a \gg m$). In general relativity such a thread is of thickness 0, and the ergo region around it also tends to zero volume.
One calculation of the tension on such a gravitational singularity 'thread' (I use the term thread so as to not get confused with string theory) comes out to a value of about 0.1 newtons. A newton of tension on something so thin is incredible. Such a thread immersed in the WZU background will find shorter paths – paths that spend more time in areas where $a \ll 1$ – these paths being much more energetically favoured. There are also very interesting effects when such gravitational thread singularities are dragged through the WZU background. I think that this might be the mechanism that creates enough action to generate electromagnetism from pure general relativity only.

A 2D slice at some time through ordinary WZU vacuum. The spots are places where a ~ 2. The straight line from A to B has an average scale factor a of 1, while the wiggly path follows a ~ 0 and hence has an average scale factor of << 1. Note that these short paths are not unique, and there is little constraint for them to be even approximately straight.

So these thread singularities thread their way through the frothy WZU metric, and as such the distance a single such thread measures between Alice and Bob may be far, far less than the flat space equivalent. It seems to me that one could integrate the metric as given in WZU equation 23 with a shortest path condition and come up with something. Here is one possible numerical way: start out with a straight thread from A to B. Then relax the straight line constraint, assign a tension to the thread, and see what the length of the thread is after a few thousand iterations, where at each iteration each segment allows itself to move toward a lower energy state (i.e. thread contraction).

This opens up:

##### Quantum non-locality

Realist, local quantum mechanics is usually thought of as requiring some dependency on non-local connections, as quantum experiments have shown.
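Returning to the numerical relaxation idea above, it can be sketched in a few lines of Python. This is a toy illustration only: the oscillating scale factor here is a made-up stand-in field, not a solution of the WZU equations, and the "thread" is just a polyline whose nodes jiggle downhill in weighted length.

```python
import math
import random

random.seed(1)

# Toy stand-in for the WZU scale factor a(x, y): it hovers around O(1)
# but dips toward zero in spots, mimicking the vacuum "froth".
# Invented for illustration, NOT the metric from the paper.
def a(x, y):
    return abs(math.sin(3.1 * x) * math.sin(2.7 * y + 1.3)) + 0.05

def path_length(pts):
    """Polyline length with each segment weighted by a at its midpoint."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        total += a((x0 + x1) / 2, (y0 + y1) / 2) * math.hypot(x1 - x0, y1 - y0)
    return total

# Straight thread from A = (0, 0) to B = (10, 0); interior nodes are free.
n = 40
pts = [(10.0 * i / n, 0.0) for i in range(n + 1)]
straight = path_length(pts)

# Greedy relaxation: jiggle one interior node at a time and keep the move
# only if the weighted length (the "energy" of the tensioned thread) drops.
for _ in range(4000):
    i = random.randrange(1, n)
    old_pt, old_len = pts[i], path_length(pts)
    pts[i] = (old_pt[0] + random.uniform(-0.05, 0.05),
              old_pt[1] + random.uniform(-0.05, 0.05))
    if path_length(pts) >= old_len:
        pts[i] = old_pt  # revert

relaxed = path_length(pts)
```

Since moves are only accepted when they shorten the weighted length, the relaxed thread is never longer than the straight one, and in practice it ends up noticeably shorter by hugging the a ≈ 0 regions.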
This shortcut path may be an answer to the need for non-local connections between particles, i.e. a mechanism for entanglement, a mechanism for Einstein's "spooky action at a distance".

##### Faster than light communication.

It's always fun to see if there are realistic methods by which one might beat the speed limit of light. It seems that wormhole traversal has been one of the favourites to date. I think that the WZU paper points at another mechanism – the fact that there exist shorter paths through the sub-quantum general relativistic froth of WZU. How might one construct a radio to do this? Entangled particles, particles that follow the zeros of a(t, x) preferentially, etc. One could imagine a brute force method to test this where huge pulses of energy are transmitted through space at random intervals. Perhaps a precursor signal could be measured at the detector, where some of the energy takes a short path through the WZU metric.

Emergent quantum mechanics comes in many forms: stochastic electrodynamics (Ana María Cetto), de Broglie–Bohmian mechanics (John W M Bush), thermal models (Gerhard Groessing), etc. In many of these forms of emergent quantum mechanics, particles have a physical existence and experience sub-quantal movement. The paper I have just posted looks at the gravitational consequences of this sub-quantal motion. An interesting finding is that while a classical Bohr hydrogen atom has a lifetime of about 10^-11 seconds, it would take that same atom 10^40 seconds or so to radiate away a few eV of energy gravitationally. This indicates that the stability of the atom is not an indication that gravity needs to be quantized, which is antithetical to Einstein in 1916:

• "…Nevertheless, due to the inner-atomic movement of electrons, atoms would have to radiate not only electro-magnetic but also gravitational energy, if only in tiny amounts.
As this is hardly true in Nature, it appears that quantum theory would have to modify not only Maxwellian electrodynamics, but also the new theory of gravitation." – Einstein, 1916

Einstein, it would seem, was wrong on the gravitational side of this. The paper looks at possible ways to see these tiny emissions (nuclear scale emissions are higher) and thus lays out a quantum gravity experiment achievable with today's technology.

The experimental parameter space. The most important thing to note is that this is a quantum gravity experiment with an achievable parameter space!

Here is the paper… Also see these references…

In this two page paper, I look at how the dimensions of a Kerr singularity and the strength of the electric Coulomb effect compare.

"…and try to find your friend at the other end." — Leonard Susskind

In this talk Leonard Susskind gives a convincing argument as to why he thinks that ER == EPR, where ER denotes an Einstein–Rosen bridge (aka wormhole) and EPR is the Einstein Podolsky Rosen paper (essentially entanglement). Leonard draws three entangled pairs of particles on the chalkboard (imagine it's not merely 3 but 3e40), and then collapses the left and right down to black holes; then the entanglement must continue, and thus ER == EPR.

Take a ring of rotating matter. No matter what frequency it rotates at, there are no general relativistic waves emanating from it. Now assume that the matter starts to clump up into two balls. NOW we have GR radiation. Now run the camera in reverse. What we have is an object that aggressively reflects (exchanges) GR radiation with other similar objects at the same frequency. The rings I am talking about are the mass of an electron and very very small.

Take a run-of-the-mill graviton detector (not yet built, nor would one be easy to build!). Put it on a table top, on this planet. Say it's detecting 1,000 gravitons per second. Now pull the table out – quickly but smoothly.
How many gravitons will it see on its 0.5 second trip to the floor? According to the equivalence principle, when it drops off the shelf it is supposed to stop seeing gravitons. According to QFT, the device is still in a gravitational field, so it will see about 500 gravitons on its half second journey. Note that the speed of the detector has not changed appreciably when it first starts to fall. "All experimental quantities are unchanged". This simple thought experiment lies at the heart of the tension between the two viewpoints.

Thoughts:

Turbulence in GR is linear and hence does not give rise to cumulative gravitational effects. Indeed, the power that can be transmitted using GR, compared to the gravity it causes, is immense. For instance: at a power transfer at the Schwinger limit (here I assume 3×10^29 watts/m^2, at optical frequencies), the non-linear effect – the gravity term – is very low (see http://arxiv.org/pdf/1007.4306v3.pdf).

Consider a 1 metre^3 box with perfect mirrors at the Schwinger limit – how much does that much radiation weigh? I get 1×10^21 joules per cubic metre at any one time, so that's 11.1 tonnes (http://www.wolframalpha.com/input/?i=10%5E21Joules%2F%28c%5E2%29). That seems like a lot of mass, but 11 tonnes in a cubic metre is not going to alter the static gravitational field much, even in a low field limit like that of the earth. That 11 tonne figure is interesting, as it is also the density of lead. It's strange (or not) that the Schwinger limit is also the density of normal matter.

From the book I am reading now (Fields of Color: The theory that escaped Einstein — Rodney A. Brooks):

"… spin is an abstract mathematical concept that is related to the number of field components and how they change when viewed at from different angles. The more field components, the higher the spin."

0, 1/2, 1, 2 are the spin values, so gravity has more field components. Can we mimic a field with a lower number of field components with one that has more field components?
Yes. So we generate everything from gravity.

Einstein was of course worried about the electromagnetic radiation emitted from a classical Bohr atom. But I have also learned that he was worried about the GR radiation from that same atom, which he claimed was 'not observed'. I think that the waves would be of very low energy, but I should work that out (re: replenishment from the turbulent gravity).

Random Q: Were there about 5 times TOO MANY GALAXIES in the early universe – which would jibe with my thought that dark matter is matter gone dark? In the early Universe matter was packed too tightly for there to be any dark stuff, so there was more galaxy formation. A: Possibly – see for instance http://astronomynow.com/2015/11/21/hubble-survey-reveals-early-galaxies-were-more-efficient-at-making-stars/

Random Q: Frame dragging. Would any other physics change over one of Tajmar's rotating superconductors, where he sees anomalous gravitational effects – i.e. look at decay rates of common isotopes, etc.?

Random Q: There is the experiment in Italy where they see decay rates changing as the year advances, which is anomalous. Wonder if some frame dragging can take care of that.

### Can a sub-quantum medium be provided by General Relativity?

Thomas C Andersen, PhD

As a personal note of celebration, Art McDonald, the director of the Sudbury Neutrino Observatory, has won the Nobel Prize in Physics. I worked on SNO for 8 years for my masters and PhD. The Sudbury Neutrino Observatory also shared the Breakthrough Prize in Fundamental Physics! The breakthrough prize is awarded to the whole collaboration (260 or so of us). It was a real treat to work on the neutrino observatory.

Available in PDF as a paper, or as a poster I presented at EmQM15 in Vienna, published in IOP physics: http://iopscience.iop.org/article/10.1088/1742-6596/701/1/012023

tom@palmerandersen.com, Ontario, Canada.
(Dated: October 19, 2015)

Emergent Quantum Mechanics (EmQM) seeks to construct quantum mechanical theory and behaviour from classical underpinnings. In some formulations of EmQM a bouncer-walker system is used to describe particle behaviour, known as sub-quantum mechanics. This paper explores the possibility that the field of classical general relativity (GR) could supply a sub-quantum medium for these sub-quantum mechanics. Firstly, I present arguments which show that GR satisfies many of the a priori requirements for a sub-quantum medium. Secondly, some potential obstacles to using GR as the underlying field are noted, for example field strength (isn't gravity a very weak force?) and spin 2. Thirdly, the ability of dynamical exchange processes to create very strong effective fields is demonstrated through the use of a simple particle model, which solves many of the issues raised in the second section. I conclude that there appears to be enough evidence to pursue this direction of study further, particularly as this line of research also has the possibility to help unify quantum mechanics and general relativity.

### The Sub-quantum Medium

In emergent QM the sub-quantum medium is the field out of which quantum behaviour emerges. Most, if not all, EmQM theories published to date do not explicitly define the nature of the sub-quantum medium; instead, quite reasonably, they only assume that some underlying field exists, having some minimum set of required properties, for instance some sort of zero point field interaction. There have of course been investigations into the physical make up of a sub-quantum medium. Perhaps the most investigated possible source is stochastic electrodynamics (SED)[5]. Investigated on and off since the 1960s, SED posits the existence of a noisy isotropic classical radiation field as the zero point field (ZPF). Stochastic electrodynamics as a sub-quantum medium has many desirable properties.
As an example of progress in stochastic electrodynamics, Nieuwenhuizen and Liska[12] have recently used computer simulation techniques to build an almost stable hydrogen atom. Yet classical electrodynamics has a few problems as the sub-quantum medium. Davidson points out that "A particle in SED gains or loses energy due to interaction with the zero point field. Atoms tend to spontaneously ionize in SED as a consequence. … The spectral absorption and emission lines are too broad in simple calculations published so far to come anywhere close to fitting the myriad of atomic spectral data."[4]

Other sub-quantum medium proposals include Brady's compressible inviscid fluid – an entirely new classical field that is posited to underpin quantum mechanics and electromagnetism.[1] This paper proposes a sub-quantum medium that is already experimentally confirmed and is, somewhat surprisingly, stronger and more flexible than usually thought – general relativity (GR). Using GR as the sub-quantum medium as presented here assumes only classical GR. Other proposals that are similar in some ways are Wheeler's geons of 1957 – constructed of source-free electromagnetic fields and gravity under the laws of standard QM[11] – and Hadley's 4-geons[8]. Hadley's proposal is perhaps the most similar to that here, but Hadley assumes the independent reality of an electromagnetic field. This paper instead uses only GR as the fundamental field.

General relativity has some qualities that lend itself to consideration as a sub-quantum medium:

1. Frictionless (inviscid): The movement of objects through empty space is observed to be frictionless, as waves and objects can travel long distances without measurable hindrance. GR's ether (such as it is) behaves as an inviscid medium in its linear regime, allowing for this. Importantly, there is friction in situations such as Kerr hole frame dragging.

2. Covariant: Manifestly so.

3. Non Linear: This non-linearity allows for a rich variety of behaviour at small scales – a minimally explored, flexible platform on which to construct particles.

4. Coupling: General relativity couples to all material, uncharged or charged.

#### Potential Problems

How can general relativity form a basis for quantum mechanics, given the following:

1. Gravity is weak. GR is often thought of as a weak force – after all, the electromagnetic force between two electrons is some 10^42 times that of their gravitational attraction! But for the purposes of a sub-quantum medium we are interested in large energy transfers (e.g. Grössing's[7] thermal ZPE environment), not the weak effects of gravitational attraction. Instead of 0 Hz attraction effects, consider gravitational waves. Looking at optical frequencies (10^14 Hz), for GR the maximum energy transfer rate before non-linear effects start to dominate is tremendously high – about 10^65 W/m^2. Compare that to electromagnetism, where we have to appeal to something like the Schwinger limit, which is only 10^30 W/m^2. Thus GR has plenty of room to host strong effects.

2. Gravity has a weak coupling. In order to model a quantum system (say a hydrogen atom), we require the quantum forces to be much stronger than the electromagnetic forces. Yet the coupling of gravity to the electron is much weaker than even the electromagnetic force. The solution to this problem lies in realizing that gravity can couple not only through '0 Hz' effects but also through the exchange of wave energy. The Possible Mechanisms section below outlines how this could happen.

3. Gravity is quadrupole (spin 2). If we are to also generate EM from GR, we require a spin 1 field to emerge. Emergence is the key – underlying fields can give rise to apparent net fields of different spin, e.g. monopole gravitational waves[9].

4. Bell's theorem and hidden variables.
Using GR as the underlying medium from which to emerge quantum mechanics would seem to have to satisfy Bell's inequalities – and thus disagree with current QM theory. Maldacena and Susskind's ER = EPR paper[10] is an example of a solution to this.

#### Possible Mechanisms

Here I investigate some consequences of purely classical geometric particle models that have the mass of the electron, in a universe where the only field is classical general relativity. The exact micro-structure of a particle is not of concern here; instead I look at some tools and building blocks with which to build elementary particles from nothing more than classical GR. An electron-like particle is modelled as a small region of space which has some geometric microstructure that results in a particle with the correct mass and spin. I will point out here that a Kerr solution with the mass and spin of an electron happens to have a (naked) singularity at virtually the Compton radius (1/13 the Compton wavelength). Whatever the exact microstructure of an elementary particle, there is certainly extensive frame dragging occurring. Frame dragging is the 'handle' to which gravitational wave energy exchange can grip.

Superradiance in GR was introduced by Press and Teukolsky's 1972 paper Floating Orbits, Superradiant Scattering and the Black-hole Bomb[13], and is covered comprehensively in Brito et al.'s 'Superradiance'[3]. This paper posits that EmQM's sub-quantum ZPF might be a runaway superradiance effect (limited by non-linear mechanics). Is the universe a black hole bomb? This superradiant (and highly absorbing – see figure 1) energy exchange of the particle with its surroundings causes the particle to be subjected to huge forces – superradiance, for example, allows for a substantial fraction of the mass of a rotating black hole to change over time scales a few times the light travel time across the hole. The recent paper by East et al.
studies black holes undergoing superradiance using a numerical method[6]. It seems that superradiance is on a knife edge with absorption – these effects happen at only slightly different frequencies. While the time scale for a black hole with the mass of an electron is a tiny 10^-65 s, it seems reasonable to assume that the frequency for superradiance is tied to the distance scales involved in the particle's structure, so there could be superradiant effects happening on different timescales. For instance, an effect at 10^-65 s could be holding the particle together, while the forces of EM and the actions of QM might take place using waves closer to the electron Compton frequency.

Look now at a Compton-frequency superradiant process. We have an energy exchange of some fraction of the mass of the electron happening at 1.2×10^20 Hz. The maximum force an effect like this can produce on an electron-mass particle is of order 0.01 newtons! Forces like this are surely strong enough to control the movement of the electron and phase lock it, giving rise to the sub-quantum force.

#### FIG. 1: From East[6]: Top: mass change over time, for incident gravitational waves with three different frequencies. ω0M = 0.75 is superradiant, while ω0M = 1 shows complete absorption. Bottom: the effect of the wave on the shape of the horizon, so the entire wave packet can be visualized.

There is also a mechanism by which electromagnetic effects can emerge from such energy exchange. See Brady[2] section 4 for one simple method of calculating an electromagnetic force from mass exchange.

### Discussion

The sub-quantum medium, whatever it is, has to behave so that quantum mechanics can arise from it. I hope that this paper has shown that general relativity covers at least some of the requirements for a sub-quantum medium. In order to fully test this idea, an actual geometrical model of the electron will likely need to be found.
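One concrete number from the model sections can be checked immediately: the Kerr spin parameter a = J/(Mc) for an electron-mass object with spin ħ/2, which the text claims puts a ring singularity at about 1/13 of the Compton wavelength. A minimal sketch in Python (the ratio works out to exactly 4π ≈ 12.6, since a = ħ/(2 m c) and λ_C = h/(m c)):

```python
# Kerr spin parameter for an electron-mass, spin hbar/2 object,
# compared against the Compton wavelength.
hbar = 1.0546e-34      # reduced Planck constant, J s
h    = 6.626e-34       # Planck constant, J s
m_e  = 9.109e-31       # electron mass, kg
c    = 2.998e8         # speed of light, m/s

a = (hbar / 2) / (m_e * c)        # a = J/(Mc) with J = hbar/2
lambda_compton = h / (m_e * c)    # Compton wavelength

# ratio = 2h/hbar = 4*pi, i.e. about 12.6 -- the '1/13' quoted above
print(f"a = {a:.3e} m = Compton wavelength / {lambda_compton / a:.1f}")
```

So the "naked singularity at virtually the Compton radius" claim is just this 4π relation in disguise.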
The techniques of numerical general relativity could be the best tool to study these interactions in detail. If the pursuit of an emergent quantum mechanics is to prove fruitful, then the idea that a field like general relativity does not hold on the microscale may have to be re-considered, as with EmQM there is no overarching 'quantum regime'. With general relativity still on the stage at 10^-17 m, Occam's razor perhaps suggests that we prove that general relativity is not the sub-quantum medium before a new field is invented.

[1] Robert Brady. The irrotational motion of a compressible inviscid fluid. page 8, jan 2013.
[2] Robert Brady and Ross Anderson. Why bouncing droplets are a pretty good model of quantum mechanics. jan 2014.
[3] Richard Brito, Vitor Cardoso, and Paolo Pani. Superradiance, volume 906 of Lecture Notes in Physics. Springer International Publishing, Cham, jan 2015.
[4] Mark P. Davidson. Stochastic Models of Quantum Mechanics – A Perspective. In AIP Conference Proceedings, volume 889, pages 106–119. AIP, oct 2007.
[5] L. de la Pena and A. M. Cetto. Contribution from stochastic electrodynamics to the understanding of quantum mechanics. page 34, jan 2005.
[6] William E. East, Fethi M. Ramazanoğlu, and Frans Pretorius. Black hole superradiance in dynamical spacetime. Physical Review D, 89(6):061503, mar 2014.
[7] G. Grössing, S. Fussy, J. Mesa Pascasio, and H. Schwabl. Implications of a deeper level explanation of the de Broglie-Bohm version of quantum mechanics. Quantum Studies: Mathematics and Foundations, 2(1):133–140, feb 2015.
[8] Mark J. Hadley. A gravitational explanation for quantum theory – non-time-orientable manifolds. In AIP Conference Proceedings, volume 905, pages 146–152. AIP, mar 2007.
[9] M. Kutschera. Monopole gravitational waves from relativistic fireballs driving gamma-ray bursts. Monthly Notices of the Royal Astronomical Society, 345(1):L1–L5, oct 2003.
[10] J. Maldacena and L. Susskind.
Cool horizons for entangled black holes. Fortschritte der Physik, 61(9):781–811, sep 2013.
[11] Charles W Misner and John A Wheeler. Classical physics as geometry. Annals of Physics, 2(6):525–603, dec 1957.
[12] Theo M. Nieuwenhuizen and Matthew T. P. Liska. Simulation of the hydrogen ground state in Stochastic Electrodynamics. page 20, feb 2015.
[13] William H. Press and Saul A. Teukolsky. Floating Orbits, Superradiant Scattering and the Black-hole Bomb. Nature, 238(5361):211–212, jul 1972.

I have been thinking about frame dragging and faster than light travel for a few days, and then about the fact that quantum collapse seems to take place 'instantly' (faster than light). So then I read about the photon size for a 1 MHz radio wave, which is 300 metres – quite large. So this huge wave has to refract as a wave and yet somehow instantly collapse into a very small area to be absorbed? Instantly? Insanity!

Wild thought: Frame dragging faster than light and gravitational shock waves to the rescue! Answer: Collapse is a shockwave that causes frame dragging, allowing for 'instant' effects to happen (also EPR). Frame dragging can in principle be used to travel faster than the speed of light. This is a known scientific fact that is thought to be impossible in practice due to all sorts of limitations. Science fiction of course loves it.

So a soliton forms and sweeps energy out of the wave and into the reception antenna. If we could control this soliton collapse, we could perhaps harness it to perform faster than light communication and travel. The soliton 'shock wave' is composed of gravity (as is light and everything else). It would have to have some very specific configuration.

Frame Dragging

Frame dragging occurs with linear effects too. My thought experiment on this is through a Mach-like viewpoint. If you are inside, at the middle of, a very long pipe which starts to accelerate, you will be dragged along. If the pipe stops at some velocity, you will approach that velocity eventually.
So space couples not to mass but to matter. If it just coupled to mass, you would not be able to tell if your neutron rope was moving or not. It couples instead to the actual bits of matter.

What about circularly polarized gravitational waves – timed so that the squished part is always in front and the expansion is behind the particle? That's 90 degrees from the direction of travel of the waves – but perhaps they can be entrained as a soliton solution.

Soliton

Would there be any consequences that we could measure? http://physics.stackexchange.com/questions/178545/maximum-power-transmitted-using-general-relativity-waves-cf-schwinger-limit

For instance, there is an upper bound on the amount of EM energy that can be poured through a square mm of area – not predicted by Maxwell's equations of course, as they are linear, but by quantum field effects. If we instead look at how much gravitational energy we can pass through that same square mm, is it the same number of joules/sec? http://en.wikipedia.org/wiki/Schwinger_limit

Well, there are a few problems with the Schwinger limit too: "A single plane wave is insufficient to cause nonlinear effects, even in QED.[4] The basic reason for this is that a single plane wave of a given energy may always be viewed in a different reference frame, where it has less energy (the same is the case for a single photon)." So according to QED, we can actually make a laser of any power – and as long as it's in a vacuum, there are no non-linear effects. Can that really be true?

The Schwinger limit is about 2.3×10^33 W/m^2. I have calculated the limit of gravitational wave energy (which depends on frequency) to be P (max gravity waves) = 3/(5*pi)*c^3/G*w^2.

In electromagnetism, QED says that the linearity of Maxwell's equations comes to an end when field strengths approach the Schwinger limit. It's about 10^18 V/m. What is the corresponding formula for gravitational waves?
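Before moving to gravity, the EM-side numbers used in these notes can be reproduced from the standard expression for the Schwinger critical field, E_S = m_e^2 c^3/(e ħ), with the intensity of a plane wave at that field strength taken as I = ε0 c E_S^2/2. A sketch in Python, which also recovers the earlier 'box of light' mass figure (for that I use the note's lower 3×10^29 W/m^2 value with a directed-flux energy density u = I/c):

```python
# Schwinger critical field, corresponding intensity, and the earlier
# box-of-light mass estimate. All constants in SI units.
m_e  = 9.109e-31       # electron mass, kg
c    = 2.998e8         # speed of light, m/s
e    = 1.602e-19       # elementary charge, C
hbar = 1.055e-34       # reduced Planck constant, J s
eps0 = 8.854e-12       # vacuum permittivity, F/m

E_s = m_e**2 * c**3 / (e * hbar)   # ~1.3e18 V/m, the '10^18 V/m' above
I_s = eps0 * c * E_s**2 / 2        # ~2.3e33 W/m^2, the '2.3e33' above
print(f"E_s = {E_s:.2e} V/m, I_s = {I_s:.2e} W/m^2")

# Earlier estimate: radiation at 3e29 W/m^2 trapped in a 1 m^3 mirror box
u = 3e29 / c                       # energy density u = I/c, ~1e21 J/m^3
print(f"u = {u:.1e} J/m^3 -> {u / c**2 / 1000:.1f} tonnes per m^3")  # ~11 tonnes
```

Both of the Schwinger figures quoted in these notes (10^18 V/m and 2.3×10^33 W/m^2) come out of this one expression.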
Since gravity is a non-linear theory, there should be a point where gravitational waves start to behave non-linearly. Here is my calculation, based on http://en.wikipedia.org/wiki/Gravitational_wave:

There is a formula there for the total power radiated by a two body system (for identical masses in orbit around each other):

(1) P = 32/5*G^4/c^5*m^5/r^5

Further down the same wiki page I see a formula for h, which has a maximum absolute value of (assuming h+ and standing at R = 2r away from the system, theta = 0):

(2) h = 1/2*G^2/c^4*2m^2/r^2

Things will be highly non-linear at h = 1/2 (which is the value of h used in the diagram on the wikipedia page!). So let's set h = 1/2, and then substitute (2) into (1) to get the power as radiated by the whole system when h = 1/2 (use a lower value like h = 0.001 perhaps to be more reasonable, if you like). I am not trying to calculate where the chirp stops in a binary spin-down here; I'm looking for the maximum field strength of a gravitational wave. I get, for the maximum power from a compact source,

(3) P = 64/5*c^3/4*m/r

That's the total power radiated when h is well into the non-linear region – you will never get more than this power out of a system using gravitational radiation. The result depends on m/r, which makes sense, as higher frequency waves with the same value of h carry more power. Putting the result in terms of orbital frequency w, we get (using Newtonian orbit dynamics, http://voyager.egglescliffe.org.uk/physics/gravitation/binary/binary.html)

(4) Pmax = 16/5*c^3/G*w^2*r^2

That's the max coming out of a region r across; we want watts per square metre, so divide by the surface area of a sphere:

(5) Pmax per square metre = 3/(5*pi)*c^3/G*w^2

The maximum power that you can deliver at 10^14 Hz (light wave frequencies, so as to compare to the E&M QED Schwinger limit) is about 10^65 W/m^2! That's a lot of power, dwarfing the Schwinger limit. Is that about right?
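Formula (5) is easy to evaluate numerically. A sketch, taking w = 5×10^14 /s as in the Wolfram Alpha link below, matching the 'visible light' comparison:

```python
import math

# Evaluate (5): Pmax per square metre = 3/(5*pi) * c^3/G * w^2
c = 2.998e8            # speed of light, m/s
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
w = 5e14               # /s, visible-light frequency scale

P_gw = 3 / (5 * math.pi) * c**3 / G * w**2
print(f"P_gw ~ {P_gw:.1e} W/m^2")                         # ~2e64

# Compare with the EM Schwinger intensity quoted earlier (~2.3e33 W/m^2)
print(f"ratio to Schwinger limit: {P_gw / 2.3e33:.1e}")   # ~1e31
```

This gives about 2×10^64 W/m^2 – consistent with the rough 10^65 quoted in these notes, and some 30 orders of magnitude above the EM Schwinger intensity.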
The max power scales as the square of the frequency w, and is truly huge, reflecting how close to linear GR is over large parameter spaces.

So for gravitation, linear behaviour holds up until some fantastic power level: http://www.wolframalpha.com/input/?i=c%5E3%2FG*%28%285*10%5E14%29%5E2%29%2Fsec%5E2 – about 10^65 watts per square metre at visible light frequencies (10^14 Hz) is the linear limit for gravitational waves. This means that gravity has 'lots of headroom' to create the phenomena of electromagnetism. Perhaps one could dream up a super efficient way to generate 'normal' quadrupole gravitational radiation using some radio sources arranged in some way. Or a way to generate anti-gravity, etc. GR certainly has a large enough range of linearity to power all of the EM we know today. It's also possible to generate monopole and spin 1 radiation from gravity – look up Brady's papers on EM generation from simple compressible fluids, for instance.

Also do the joules/sec per square mm or whatever calc. Also look at some other consequences in the dark recesses of the proton and electron (my models of them, or effects just based on size and field levels). At what distance from the centre of an electron would we start to get non-linear EM effects? Same for quarks?

http://en.wikipedia.org/wiki/Gravitational_wave
http://voyager.egglescliffe.org.uk/physics/gravitation/binary/binary.html
Ref http://www.jetp.ac.ru/cgi-bin/dn/e_038_04_0652.pdf

Let's look at an early universe model made entirely of classical general relativity. Multiply connected, very lumpy, with energy across huge bandwidths. Lots of energy – some 10^80 nucleons' worth, all in some region with small finite volume. How would this smooth itself out as time evolves? Are fundamental particles at their core an echo of the conditions at the big bang? In other words, the energy density in g/cm^3 of the core of an electron is perhaps the same energy density at which electrons were formed.
#### Crazy thought:

I think that electrons are much much smaller than quarks, and as such formed earlier in the big bang. This was the start of inflation. The universe consisted of electrons plus other chaotic GR mess. So we have incredible expansion as the electrons repel each other ferociously. Then as time passed, and the universe approached the metre size, quarks and nucleons organized to quench the repulsion. According to the standard model of inflation (see below), that means that electrons are about 10^-77 m across while quarks are larger, more like 10^-27 m. (Not sure I did the math right?)

So inflation is a phenomenon of the creation of charge in the Universe. Reading a little on this – it's at odds with the current theory (no doubt!), in that the current theory has inflation coming when the strong nuclear force is separating out. But perhaps that's another way to look at it – there are no forces other than random chaotic ones, and electrons give quarks a reason to be created: to soak up the energy of (or quench) the inflation.

Wikipedia: the large potential energy of the inflaton field decays into particles and fills the Universe with Standard Model particles – electrons and quarks apply brakes to inflation as they condense.

The cosmological constant is bound up in a spring-like effect of noisy GR wave energy piled to the limit of curvature. Once we start to drop density, density drops faster and faster as GR is non-linear, so there is less to keep it together. This is the origin of the cosmological constant, which powers inflation.

Wikipedia: This steady-state exponentially expanding spacetime is called a de Sitter space, and to sustain it there must be a cosmological constant, a vacuum energy proportional to $\Lambda$ everywhere. In this case, the equation of state is $\! p=-\rho$.
The physical conditions from one moment to the next are stable: the rate of expansion, called the Hubble parameter, is nearly constant, and the scale factor of the Universe is proportional to $e^{Ht}$. Inflation is often called a period of accelerated expansion because the distance between two fixed observers is increasing exponentially (i.e. at an accelerating rate as they move apart), while $\Lambda$ can stay approximately constant (see deceleration parameter).

The basic process of inflation consists of three steps:

1. Prior to the expansion period, the inflaton field was at a higher-energy state.
2. Random quantum fluctuations triggered a phase transition whereby the inflaton field released its potential energy as matter and radiation as it settled to its lowest-energy state.
3. This action generated a repulsive force that drove the portion of the Universe that is observable to us today to expand from approximately 10^-50 metres in radius at 10^-35 seconds to almost 1 metre in radius at 10^-34 seconds.

## The Aether

#### Einstein:

We may say that according to the general theory of relativity space is endowed with physical qualities; in this sense, therefore, there exists an aether. According to the general theory of relativity space without aether is unthinkable; for in such space there not only would be no propagation of light, but also no possibility of existence for standards of space and time (measuring-rods and clocks), nor therefore any space-time intervals in the physical sense. But this aether may not be thought of as endowed with the quality characteristic of ponderable media, as consisting of parts which may be tracked through time. The idea of motion may not be applied to it. [1]

Brady, in the paper "The irrotational motion of a compressible inviscid fluid", hypothesizes something different – that the universe is made of a non-relativistic compressible fluid, and that this fluid generates General Relativity.
Einstein's inertial medium behaves as a nonrelativistic barotropically compressible inviscid fluid.[2]

Although my model of the electron and quantum effects is very similar to Brady's, I diverge with him on the essence of the aether. I hypothesize that Brady's and Einstein's ethers are the same thing, so that instead of Brady's concept of generating GR from an aether, we start with classical general relativity (with 'no matter', so the stress tensor T = 0), and then create sonons as solutions of GR. The aether is that of Einstein's GR.

## Einstein's Aether in Fluid Dynamics terms

Einstein's aether is inviscid – it has no viscosity (rocks travelling through empty space experience no drag…). Is it compressible? Certainly – this is what constructs such as black holes are. Is it irrotational? That is not a property that we need to determine, since without viscosity an irrotational flow will stay that way.

#### Truly Inviscid?

No. GR is non-linear, which makes the inviscid property only an approximation – it's a good approximation, though! Waves generated on an ocean or an oil puddle in a lab travel a limited distance, while the waves of GR can easily travel the universe. But they don't travel 'forever'.

Consider now the construction of a Brady-like sonon out of pure GR. We follow Brady's paper until section 1.1, where he states:

When an ordinary vortex is curved into a smoke ring, this force is balanced by Magnus forces (like the lift of an aircraft wing) as the structure moves forward through the fluid [10]. However a sonon cannot experience Magnus forces because it is irrotational, and consequently its radius will shrink, causing the amplitude A in (5) to grow due to the conservation of fluid energy. Nonlinear effects will halt the shrinking before A reaches about 1 since the density cannot become negative.[3]

Intriguing. Look now at a completely classical general relativistic object – a spinning Kerr solution.
We have a tightly spinning GR object that can shrink no further. Since we are trying to model an electron here, we use the standard black hole values (for an electron model this is a 'naked' a > m Kerr solution [6]).

Brady's sonons interact with the surrounding aether – how would that work in GR? We are after all taught that GR objects like black holes have no hair. But of course they can have hair; it's just that it will not last long. That's the point here. Sonons can and will stop interacting if the background incoming waves die down below a certain point. But above a certain point black holes become perturbed, and things like 'superradiance', as Teukolsky and others discovered, come into play. Indeed, as long as there are incoming waves, it seems that objects made of GR are highly reactive, and not boring at all.[4][5]

So pure GR has at least the ability to interact in interesting ways, but are the numbers there? What frequencies do we need for Brady-like sonons constructed from GR (I'll call them geons from now on) to get to the point where electromagnetic-strength interactions take place? Brady's interactions occur with mass transfer – the compressible fluid carries mass to and from each sonon in a repeating manner. Not a problem for any GR 'blob-geon': if they interact, then energy must be flowing in and out – that's the definition of interaction.

#### An Electron Model

A previous post here – An Electron Model from Gravitational Pilot Waves – outlines the process. We take a small region of space (e.g. containing a Kerr solution) and assume that this region of space is exchanging gravitational energy with its surroundings. Call it a geon-electron. Assuming that the exchange takes place in a periodic fashion, the mass of this geon-electron (the energy contained inside the small region of space) is given as

me(t) = me*((1 - f) + f*sin(v*t))

where v is some frequency, and f is the proportion of mass that is varying, so f ranges from 0 to 1.
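The numbers this model leans on can be checked directly. A minimal sketch in Python: the exchange fraction f is an assumed illustrative value, and the force estimate F ~ f·m_e·c·ν is my own back-of-envelope reading of the ~0.01 N figure from the superradiance section (a fraction f of the momentum m_e·c exchanged once per cycle); the EM-to-gravity ratio is the standard Coulomb/Newton comparison for two electrons:

```python
# Constants (SI)
G   = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c   = 2.998e8          # speed of light, m/s
h   = 6.626e-34        # Planck constant, J s
m_e = 9.109e-31        # electron mass, kg
e   = 1.602e-19        # elementary charge, C
k_e = 8.988e9          # Coulomb constant, N m^2 C^-2

# The 'some 10^40' factor: Coulomb vs Newton for two electrons
em_over_gravity = k_e * e**2 / (G * m_e**2)
print(f"EM/gravity ratio: {em_over_gravity:.1e}")   # ~4e42

# De Broglie's internal clock: the Compton frequency nu = m_e c^2 / h
nu = m_e * c**2 / h
print(f"Compton frequency: {nu:.4e} Hz")            # ~1.2356e20 Hz

# Momentum-flow scale if a fraction f of m_e*c is exchanged each cycle
f = 0.3                                             # assumed exchange fraction
F = f * m_e * c * nu
print(f"exchange force scale: {F:.1e} N")           # ~0.01 N
```

With any sizeable f, a Compton-frequency exchange lands right at the 0.01 N order quoted earlier.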
This varying mass will give rise to changes in the gravitational potential outside the region. But gravitational effects do not depend on the potential itself; rather, they depend on the rate of change of the potential over spacetime intervals. So it is not the potential from this tiny mass that is relevant, it is the time derivative of the potential that matters:

Potential = -G*me(t)/r

Look at the time derivative of the potential:

dP/dt = -G*me*f*v*cos(v*t)/r

This rate of change is what one can think of as the force of gravity, and it rises linearly with the frequency of the mass oscillation. The EM force is some 10^40 times that of gravity, so we just need to use this factor to figure out an order of magnitude estimate of the frequency of this geon mass exchange rate. This is detailed in the 'Coulomb Attraction' section of an earlier post.

Using de Broglie's frequency – he considered the Compton value of 1.2356×10^20 Hz as the rest frequency of the internal clock of the electron – one arrives at an electron model with these properties:

• Entirely constructed from classical general relativity
• Frequency of mass exchange is the Compton frequency
• Electromagnetic effects are a result of GR phenomenology
• Quantum effects such as orbitals and energy levels are a natural result of these geons interacting with their own waves, so QM emerges as a phenomenon too

#### Einstein's Vision:

"I published the paper on the relativistic dynamics of the singular point indeed a long time ago. But the dynamical case still has not been taken care of correctly. I have now come to the point where I believe that results emerge here that deviate from the classical laws of motion. The method has also become clear and certain. If only I would calculate better! . . .
It would be wonderful if the accustomed differential equations would lead to quantum mechanics; and I do not regard it as being at all out of the question." (Ref: Miller, 62 Years of Uncertainty)

#### The State of Physics today

Obviously a sea change in fundamental physics would be needed for anything like these ideas to be considered. In fact it's not that the ideas here might be correct, but rather that Brady and others who toil on actual progress in physics are sidelined by the current 'complexity is king' clique that is the physics community today. The physics community is, more than it ever has been in the past, a tightly knit clique. This may be the fault of the internet and the locked-in groupthink that instant communication can provide. This clique gives rise to ideas like 'quantum mechanics is right' and other absurdities, such as the millions of hours spent on String Theory when it's 'not even wrong'.

#### Tests and Simulations

Given the entrenched frown upon alternative bases for the underpinnings of our physical world, we need to look for experimental evidence to support these kinds of theories. The work of Yves Couder and his lab is one kind of essential experiment. They have shown conclusively that quantum-like behaviour can emerge from classical systems. Another path – one that in my opinion has been somewhat neglected in this field – is that of numerical techniques. Here I outline some steps that might be taken to construct a GR based model of an electron. Excuse the more colloquial manner; I am making notes for a future project here!

#### Numerical Plans

There are only about 22 Compton wavelengths within the Bohr radius. So if one goes to a 100 Compton wavelength simulation zone, with 1000 grid points on a side, that's 1e9 grid points, and each point needs only four 8 byte doubles, so 32 bytes, so 32 GB. The equations to solve on this simple grid are those of fluid dynamics: the compressible isothermal inviscid Euler equations.
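The cost arithmetic for this grid can be spelled out explicitly; a sketch, using the notes' own rough figures of ~1000 FLOPs per grid point per step and ~10 GFLOPS sustained:

```python
# Back-of-envelope cost of the proposed 1000^3 grid for the Euler equations.
points          = 1000**3            # 1e9 grid points
bytes_per_point = 4 * 8              # four 8-byte doubles per point
mem_gb = points * bytes_per_point / 1e9
print(f"memory: {mem_gb:.0f} GB")                       # 32 GB

flops_per_step = points * 1000       # ~1000 FLOPs per point visit
gflops         = 10e9                # assumed sustained rate
sec_per_step   = flops_per_step / gflops
print(f"time per step: {sec_per_step:.0f} s")           # ~100 s

dt    = 1e-22                        # step: ~1/100 of the Compton time, s
t_end = 100 * 3e-19                  # light crosses the atom ~100 times, s
steps = t_end / dt
print(f"{steps:.0f} steps, ~{steps * sec_per_step / 86400:.0f} days of compute")
```

At face value that is the better part of a year of compute, which is why the plan calls for speeding up the algorithm and cutting the per-point cost.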
(See, for instance, 'I do like CFD'.) With a 32 GB data set, 1e9 data points, and about 1000 computer FLOPs per point visit, we have 1e12 FLOPs per time step; with an algorithm that gets 10 GFLOPS, I get about 100 seconds per time step. Each time step needs to cover about 1/100th of the Compton time, or about 1e-22 secs, and we need to let light cross the atom (3e-19 secs) a hundred times to get things to converge, or about 3e-17 secs, so 300,000 time steps. (Better speed up the algorithm! Should be easy to get 20 GFLOPS over 8 processors, and perhaps cut FLOPs per grid point down, which could mean a day or so on an 8 core Intel.)

#### Computer Model: Note on the Fine Structure Constant (useful in a numerical model)

The quantity α was introduced into physics by A. Sommerfeld in 1916 and in the past has often been referred to as the Sommerfeld fine-structure constant. In order to explain the observed splitting or fine structure of the energy levels of the hydrogen atom, Sommerfeld extended the Bohr theory to include elliptical orbits and the relativistic dependence of mass on velocity. The quantity α, which is equal to the ratio v1/c, where v1 is the velocity of the electron in the first circular Bohr orbit and c is the speed of light in vacuum, appeared naturally in Sommerfeld's analysis and determined the size of the splitting or fine-structure of the hydrogenic spectral lines. [*]

#### Cosmic Censorship:

Weak or strong, the cosmic censorship conjecture states that naked singularities can't be seen; otherwise everything would break down, it would be really bad, and worst of all theorists would be confused. (Hawking and Ellis, The Large Scale Structure of Space-Time, Cambridge 1973.) But it turns out that singularities very likely don't actually exist in a real universe governed by GR.
Any lumpy, non-symmetric spacetime can have all the spinning black holes it wants – at any angular momentum, even with a > m (angular momentum greater than the mass, in suitable units) – as the Kerr solution plus bumps (bumps being incoming full-bandwidth GR noise) will have no paths leading to any singularity! So the curtain can be lifted; the horizon is not needed to protect us.

#### Cosmic Serendipity Conjecture:

In any sufficiently complex solution of GR, there exist no singularities. I am not talking about naked singularities here; I mean any and all singularities. The complex nature of the interaction of GR at the tiny scales where the singularity would start to form stops that very formation. In other words, the singularity fails to form, as the infalling energy always has some angular momentum in a random direction, which ruins the formation of a singularity. In all likelihood, actual physical spinning black holes in a turbulent environment (normal space) will have no singularity. I will let Brandon Carter speak now:

"Thus we reach the conclusion that a timelike or null geodesic or orbit cannot reach the singularity under any circumstances except in the case where it is confined to the equator, cos θ = 0. … Thus as symmetry is progressively reduced, starting from the Schwarzschild solution, the extent of the class of geodesics reaching the singularity is steadily reduced likewise, … which suggests that after further reduction in symmetry, incomplete geodesics may cease to exist altogether"

Not cosmic censorship, but almost the opposite – singularities can't exist in a GR universe (one with bumps) because there are no paths to them. We have all been taught that singularities form quickly – that when a non-spherical mass is collapsing, GR quickly smooths the collapse, generating a singularity, neatly behind a horizon.
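For a concrete sense of how far into the a > m regime an elementary particle sits, here is a rough SI-unit estimate for the electron, taking its spin angular momentum J = ħ/2 and comparing the Kerr spin parameter a = J/(Mc) with the mass in geometric units. The constants are approximate standard values; this is a scale estimate, not part of any derivation in the post.

```python
# Rough estimate of how "over-extremal" an electron is as a Kerr object.
# Approximate SI constants; a and m are both expressed in metres.

G    = 6.674e-11      # m^3 kg^-1 s^-2
c    = 2.998e8        # m/s
hbar = 1.055e-34      # J s
m_e  = 9.109e-31      # kg

a = hbar / (2 * m_e * c)       # Kerr spin parameter a = J/(M c), with J = hbar/2
m_geom = G * m_e / c ** 2      # electron mass in geometric units (metres)

a_over_m = a / m_geom          # wildly greater than 1: deep in the a > m regime
print(f"a ~ {a:.2e} m, m ~ {m_geom:.2e} m, a/m ~ {a_over_m:.1e}")
```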
Of course that notion is correct as far as it goes, but what it fails to take into account is that in a real situation there is always more infalling energy, and that new infalling energy messes up the formation of the singularity. While there may be solutions to Einstein's equations that show a singularity (naked or not), these solutions are unphysical, in that the real universe is bumpy and lumpy. So while the equations hold 'far' away from the singularity, the detailed gravity in the high-curvature region keeps it just that – high curvature, as opposed to a singularity. The papers of A. Burinskii come to mind, e.g.: Kerr Geometry as Space-Time Structure of the Dirac Electron.

#### Conclusion

I am willing to bet that this conjecture is experimentally sound, in that there are no experiments that have been done to refute it. (That's a joke, I think.) On the theory side, one would have to prove that a singularity is stable against perturbation by incoming energy, which from my viewpoint seems unlikely, as the forming singularity would have diverging fields and a diverging response to incoming energy, which would blow it apart – like waves in the ocean that converge on a rocky point.

http://physics.stackexchange.com/questions/193340/does-general-relativity-entail-singularities-if-theres-a-positive-cosmological

–Tom

# Koide and Compton

The Koide formula is a remarkable equation relating the masses of the 3 leptons. When it was first written down, it did not in fact predict the mass of the tau to within experimental error. It turns out, though, that the experiments were wrong: a decade or two passed, and the Koide formula turned out to be extremely accurate. The Koide formula has been compared to Descartes' theory of circles: one can see that the two relationships bear a resemblance.
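The Koide relation itself is easy to check numerically. A minimal sketch, using approximate published charged-lepton masses in MeV (the specific mass values are standard figures, not taken from this post):

```python
from math import sqrt

# Approximate charged-lepton masses in MeV.
m_e, m_mu, m_tau = 0.5110, 105.658, 1776.86

# Koide ratio: Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2
Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau)) ** 2

print(f"Q = {Q:.6f} (Koide predicts exactly 2/3 = {2/3:.6f})")
```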
Jerzy Kocik, in his paper "The Koide Lepton Mass Formula and Geometry of Circles", uses this correspondence to show that the Koide formula looks like a generalization of the Descartes circle equation – with a characteristic angle of about 48 degrees. If one uses this formula, then the radius of the electron is actually the biggest and the tau's the smallest (with a further particle having no or almost no mass… ν?). So are there any physical models that work well with the lightweight electron being large?

The Koide Lepton Mass Formula and Geometry of Circles – the 2012 geometry paper – uses inverse masses as Descartes curvatures, so the electron is bigger than the muon.

Gravity vs. Quantum theory: Is the electron really pointlike? Alexander Burinskii posits these same radii for the electron, muon and tau, using the Kerr–Newman formula r = J/m = ħ/2m. Note that I would use only the Kerr formula (same answer for large a). This implies a huge electron, but as Burinskii points out, this might not be the size we see when it is accelerated, etc.

So if the Koide formula is real, then it describes some relationship between the areas (using the geometry paper) where they overlap at some 48-degree angle (look at the diagrams). The naked Kerr solution describes a wormhole-like situation, so we could get the mass oscillation that I am looking for. Also – is a Kerr solution with a spin parameter a this high really a naked singularity? The ring would look like a straight line (use cylindrical coordinates) – like a line of Schwarzschild solutions moving in space. Would this make a horizon again? (I am thinking of a tubular horizon…)

http://arxiv.org/pdf/astro-ph/0701006.pdf

1112.0225.pdf (Burinskii)

# Why not emergent QED?

My thesis is that electromagnetic effects, along with quantum behaviour, emerge from large-amplitude GR monopole wave interaction in the high-memory regime. So it's basically a recipe for QED. What is the biggest problem in the accepted QED? The renormalization problem.
So let's look at how to solve it with my emergent sonon-like gravity system.

"De Broglie's law of motion for particles is very simple. At any time, the momentum is perpendicular to the wave crests (or lines of constant phase), and is proportionally larger if the wave crests are closer together. Mathematically, the momentum of a particle is given by the gradient (with respect to that particle's co-ordinates) of the phase of the total wavefunction. This is a law of motion for velocity, quite unlike Newton's law of motion for acceleration." – Antony Valentini, Beyond the Quantum

So are the GR constructs that I espouse in these posts able to naturally create such an effect? We have monopole waves…

I start with a screen grab from the video below. Yves Couder and friends are clearly looking at hidden variable theories. Here is a 3-minute movie with the above slide:

# The pilot-wave dynamics of walking droplets

Here is a paper about eigenstates, etc.: Self-organization into quantized eigenstates of a classical wave-driven particle (Stéphane Perrard, Matthieu Labousse, Marc Miskin, Emmanuel Fort, and Yves Couder). Compare that with my hastily written post.

Yves Couder Explains Wave/Particle Duality via Silicon Drop: "Couder could not believe what he was seeing". For me it was sort of a eureka moment at home on a Sunday afternoon. Here is a link to the whole show (45 mins).

## Valentini:

Valentini (along with me) thinks that QM is wrong, in that it's not the 'final layer'. His de Broglie arguments are powerful and hit close to home for me. I have read most of David Bohm's papers and books since discovering him as a 4th-year undergrad back in the 80s. Bohm's ideas launched mine. Note that much of physics is built on the assumption that with QM somehow 'this time it's different' – that any future theory will need to be QM-compliant or it is wrong. As if QM were somehow as certain as the (mathematical, and hence solid) 2nd Law or something.
This leaves no room for argument or dissent. Perfect conditions for a paradigm change!

http://www.perimeterinstitute.ca/search/node/valentini

E.g., this is the presentation that outlines things as he sees them. I see things that way too, although I am of the opinion that the pilot waves are GR ripples.

http://streamer.perimeterinstitute.ca/Flash/3f521d41-f0a9-4e47-a8c7-e1fd3a4c63c8/viewer.html

Not Even Wrong: Why does nobody like pilot-wave theory?

"De Broglie's law of motion for particles is very simple. At any time, the momentum is perpendicular to the wave crests (or lines of constant phase), and is proportionally larger if the wave crests are closer together. Mathematically, the momentum of a particle is given by the gradient (with respect to that particle's co-ordinates) of the phase of the total wavefunction. This is a law of motion for velocity, quite unlike Newton's law of motion for acceleration." – Antony Valentini, Beyond the Quantum

If QM runs as wiggles in GR, we have a possible way to get collapse, and a linear QM theory that breaks down over long times or with too many signals in one place. In other words: each QM state vector is represented not only as a vector in a Hilbert space, but is really a 'real' arrangement of (usually small-scale) GR waves. Since GR waves behave linearly over a large range of frequencies and amplitudes, these waves do not interact, and can be represented well as they are now in QM – by a Hilbert space. Collapse occurs when this linearity is compromised. Thus there is a limit to entanglement and quantum computing. The collapse of the wave function is a physical happening, independent of observers. It occurs when these waves self-interact. Indeed, with a theory where the QM states can only interact in a linear fashion, we get absurdities such as infinite computing power combined with massive Hilbert spaces. This should be quantifiable.
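Whatever medium the pilot wave lives in, the guidance law in the quote (momentum = gradient of the phase) is easy to illustrate numerically. A toy sketch with a one-dimensional plane wave, in units where ħ = 1 (all names and values here are illustrative, not from the post):

```python
import numpy as np

hbar = 1.0
k = 5.0                                  # wavenumber of the plane wave
x = np.linspace(0.0, 2.0 * np.pi, 2001)

psi = np.exp(1j * k * x)                 # plane wave exp(i k x)

# de Broglie guidance law: p = hbar * d(phase)/dx.
# np.unwrap removes the 2*pi jumps in the wrapped phase.
phase = np.unwrap(np.angle(psi))
p = hbar * np.gradient(phase, x)

print(p[1000])                           # ~= hbar * k = 5.0 everywhere
```

For a superposition of waves the recovered momentum varies with position, which is exactly the "closer crests mean larger momentum" statement in the quote.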
In other words, the collapse can be simulated on a computer system without Bohr-like handwaving or the Many Worlds' trillions of universes per second per cubic cm coming into existence to avoid a true collapse (OK, I know it's more than trillions per second…). To estimate the conditions for collapse: take the likely amplitude of a single quantum wave (by looking at this mass-difference theory that I have, for instance) and then see how many can pile into the same place before non-linear interference occurs – which would start a collapse. So collapse occurs when a simple isolated system interferes with a system with many more moving parts – an observation. Entanglement/EPR/Bell outside the light cone is handled by non-local topology – 'wormholes' in GR.

–Tom
{}
# How do you solve 1/2x-9<2x?

Jan 14, 2017

See the entire solution process below.

#### Explanation:

First, multiply each side of the inequality by $\textcolor{red}{2}$ to eliminate the fraction and keep the inequality balanced:

$\textcolor{red}{2} \times \left(\frac{1}{2} x - 9\right) < \textcolor{red}{2} \times 2 x$

$\left(\textcolor{red}{2} \times \frac{1}{2} x\right) - \left(\textcolor{red}{2} \times 9\right) < 4 x$

$x - 18 < 4 x$

Next, subtract $\textcolor{red}{x}$ from each side of the inequality to isolate the $x$ terms on one side and the constants on the other side while keeping the inequality balanced:

$x - \textcolor{red}{x} - 18 < 4 x - \textcolor{red}{x}$

$0 - 18 < \left(4 - 1\right) x$

$- 18 < 3 x$

Now divide each side of the inequality by $\textcolor{red}{3}$ to solve for $x$ while keeping the inequality balanced:

$- \frac{18}{\textcolor{red}{3}} < \frac{3 x}{\textcolor{red}{3}}$

$- 6 < \frac{\textcolor{red}{\cancel{\textcolor{black}{3}}} x}{\cancel{\textcolor{red}{3}}}$

$- 6 < x$

Then, reverse or "flip" the inequality to state the solution in terms of $x$:

$x > - 6$
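As a quick numerical sanity check on the result (a throwaway snippet, not part of the original worked solution), test values on either side of the boundary x = −6:

```python
def left_side(x):
    """Left-hand side of the original inequality: (1/2)x - 9."""
    return 0.5 * x - 9

def right_side(x):
    """Right-hand side of the original inequality: 2x."""
    return 2 * x

# x = -6 is the boundary: both sides are equal there.
print(left_side(-6), right_side(-6))       # -12.0 -12

# Values just above -6 satisfy the inequality; values below do not.
print(left_side(-5.9) < right_side(-5.9))  # True
print(left_side(-6.1) < right_side(-6.1))  # False
```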
{}
# Biaxial Bending Sign convention

I did a couple of quick calcs to better understand the phenomenon of biaxial bending. One thing that is currently tripping me up is the sign convention.

For example, suppose we have a rectangular beam with $b = 15''$ and $h = 30''$ and a coordinate system established as shown, with the origin exactly at the middle, so that $I_x = 33750\text{ in}^4$ and $I_y = 8437.5\text{ in}^4$. Furthermore we have a moment $M_x = 100\text{ lb.in}$ occurring about the +x axis and a moment $M_y = 100\text{ lb.in}$ occurring about the +y axis.

Intuitively, we can tell $M_x$ is causing compression at the bottom and tension at the top of this section. Likewise we can also say $M_y$ is causing compression at the right and tension at the left of the section. However, when we run the numbers:

\begin{alignat}{2} \sigma_{top} &= \frac{M_xy}{I_x} = \frac{(100\text{ lb.in})*(15\text{ in})}{33750\text{ in}^4} &&= 0.044\text{ psi} \\ \sigma_{bot} &= \frac{M_xy}{I_x} = \frac{(100\text{ lb.in})*(-15\text{ in})}{33750\text{ in}^4} &&= -0.044\text{ psi} \\ \sigma_{right} &= \frac{M_yx}{I_y} = \frac{(100\text{ lb.in})*(7.5\text{ in})}{8437.5\text{ in}^4} &&= 0.088\text{ psi} \\ \sigma_{left} &= \frac{M_yx}{I_y} = \frac{(100\text{ lb.in})*(-7.5\text{ in})}{8437.5\text{ in}^4} &&= -0.088\text{ psi} \\ \end{alignat}

It is highly confusing to me that $\sigma_{bot}$ and $\sigma_{right}$, which are both in compression, have opposing signs. Equally confusing is the fact that $\sigma_{top}$ and $\sigma_{left}$, which are both in tension, have opposing signs. Can anyone bring light to this basic mechanics-of-materials issue?

• Your "numbers" assume a sign convention. Where do these numbers (or the formulas behind them) come from? – Pere Aug 30 '16 at 20:07
• The numbers are given in the problem. The formulas are your regular Euler-Bernoulli bending stress equations from mechanics of materials.
See under "Euler-Bernoulli" bending theory in en.wikipedia.org/wiki/Bending – user32882 Aug 30 '16 at 20:15
• As far as I can see, the source you pointed to just shows ${\sigma}= \frac{M y}{I_x}$. For the other equation, where does it come from that it is ${\sigma}= \frac{M x}{I_y}$ instead of ${\sigma}= -\frac{M x}{I_y}$, when you use the sign convention for X and Y you show in your drawing? – Pere Aug 30 '16 at 21:55

Alternatively, you could take the absolute value of $\frac{My}{I}$ and then add the sign afterwards, in order to ensure that tensile stresses are positive and compressive stresses are negative (or vice versa, as long as you are consistent).
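The comments point at the resolution: with the axes drawn in the question, the bending formula carries a minus sign on the $M_y$ term, $\sigma = M_x y / I_x - M_y x / I_y$, so that tension comes out consistently positive. A sketch with the section properties and moments from the question (the sign convention here is the usual right-hand-rule one; treat it as an assumption if your textbook defines positive moments differently):

```python
# Biaxial bending stress with a consistent sign convention:
#   sigma = Mx*y/Ix - My*x/Iy   (tension positive)
# Section properties and moments taken from the question above.

Ix, Iy = 33750.0, 8437.5      # in^4
Mx, My = 100.0, 100.0         # lb-in, about the +x and +y axes

def sigma(x, y):
    """Bending stress (psi) at point (x, y) of the cross-section."""
    return Mx * y / Ix - My * x / Iy

print(f"top    (0, +15):  {sigma(0.0,  15.0):+.3f} psi (tension)")
print(f"bottom (0, -15):  {sigma(0.0, -15.0):+.3f} psi (compression)")
print(f"right  (+7.5, 0): {sigma(7.5,   0.0):+.3f} psi (compression)")
print(f"left   (-7.5, 0): {sigma(-7.5,  0.0):+.3f} psi (tension)")
```

With the minus sign in place, both tension fibers (top and left) are positive and both compression fibers (bottom and right) are negative, matching the intuition stated in the question.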
{}
# Balmer series wavelengths

E. Determine the photon energy (in electron volts) of the second line in the Balmer series.

Calculate the wavelengths of the first and limiting lines in the Balmer series. The wavelength of the first spectral line in the Balmer series of the hydrogen atom is 6561 Å.

The Balmer series in a hydrogen atom relates the possible electron transitions down to the n = 2 level to the wavelength of the emission that scientists observe. The wavelength of the photon emitted in a transition in the hydrogen atom is given by

1/λ = R(1/n₁² − 1/n₂²)

The shortest wavelength in the Balmer series is emitted when the electron falls from n₂ = ∞ to n₁ = 2.

Problem 18, Medium Difficulty: (a) Which line in the Balmer series is the first one in the UV part of the spectrum?

The first line of the Balmer series occurs at a wavelength of 656.3 nm.

D. In what part of the electromagnetic spectrum does this line appear?

The visible-light spectrum for the Balmer series shows spectral lines at 410, 434, 486, and 656 nm. The Balmer series is the spectral series emitted when an electron jumps from a higher orbital down to an orbital of a unipositive hydrogen-like species.

The individual lines in the Balmer series are given the names Alpha, Beta, Gamma, and Delta, each corresponding to an nᵢ value of 3, 4, 5, and 6 respectively. The transitions are named sequentially by Greek letter: n = 3 to n = 2 is called H-α, 4 to 2 is H-β, 5 to 2 is H-γ, and 6 to 2 is H-δ.

The wavelength of the first line of the Balmer series in the hydrogen spectrum is 6563 Å. For the 4 → 2 transition, λ = 4861 Å; for the 3 → 2 transition, λ = 6563 Å. These lines are emitted when the electron in the hydrogen atom drops from the n = 3 or greater orbital down to the n = 2 orbital. What is the energy difference between the two energy levels involved in the emission that results in this spectral line?

The Balmer series is characterized by the electron transitioning from n ≥ 3 to n = 2, where n refers to the radial quantum number, or principal quantum number, of the electron.

A) 304 nm B) 30.4 nm C) 329 nm D) 535 nm E) 434 nm. Answer: E.

78) Calculate the wavelength, in nm, of the first line of the Balmer series.

Related questions: What will the wavelength of the first line of the Lyman series be? If the wavelength of the first line of the Balmer series of hydrogen is 6561 Å, the wavelength of the second line of the series should be (A) 13122 …

B. Wavelengths of these lines are given in Table 1.

Solution: For the maximum wavelength in the Balmer series, n₂ = 3 and n₁ = 2. There are six named series of spectral lines for hydrogen, one of which is the Balmer series.

The Balmer series corresponds to all electron transitions from a higher energy level down to n = 2. The wavelengths of these lines are given by 1/λ = R_H(1/4 − 1/n²), where λ is the wavelength, R_H is the Rydberg constant, and n is the level of the original orbital.

The wavelength of the Hα line of the Balmer series is 6563 Å. Compute the wavelength of another line of the Balmer series. What is the energy difference between the two energy levels involved in the emission that results in this spectral line?
What will the wavelength of the first line of the Lyman series be? Options: (a) 1215.4 Å (b) 2500 Å (c) 7500 Å (d) 600 Å. Correct answer: (a) 1215.4 Å. (Asked Jun 24, 2019 in NEET by r.divya, class-11.)

Since Eₙ = −13.6/n² eV, at the ground level (n = 1) E₁ = −13.6 eV, and at the first excited state (n = 2) E₂ = −3.4 eV. So hν = E₂ − E₁ = −3.4 + 13.6 = 10.2 eV = 1.6 × 10⁻¹⁹ × 10.2 = 1.63 × 10⁻¹⁸ J. Also, c = νλ, so λ = hc/(E₂ − E₁) = (3 × 10⁸)(6.63 × 10⁻³⁴)/(1.63 × 10⁻¹⁸) = 1.22 × 10⁻⁷ m ≈ 122 nm. This set of spectral lines is called the Lyman series.

The wavelength of the second spectral line in the Balmer series of the singly-ionized helium atom is (a) 1215 Å (b) 1640 Å (c) 2430 Å (d) 4687 Å. The correct answer is option (a).

The first line of the Balmer series of the He⁺ ion has a wavelength of 164 nm; what is the wavelength of the series limit? What electronic transition in the He⁺ ion would emit radiation of the same wavelength as that of the first line in the Lyman series of hydrogen?

The wavelength of the Hα line of the Balmer series is 6500 Å. The wavelength of the last line in the Balmer series of the hydrogen spectrum is 364 nm.

C. Determine the wavelength of the first line in the Balmer series. Determine the wavelength, in nanometers, of the line in the Balmer series corresponding to n₂ = 5: 434 nm.

For the maximum wavelength in the Balmer series, n₂ = 3 and n₁ = 2:

1/λ = 1.09 × 10⁷ × (1/2² − 1/3²) = 1.09 × 10⁷ × (1/4 − 1/9) = 1.09 × 10⁷ × (5/36)

⇒ λ = 36/(5 × 1.09 × 10⁷) = 6.60 × 10⁻⁷ m = 660 nm

A first-order reaction requires 8.96 months for the concentration of reactant to be reduced to 25.0% of its original value. What is the half-life of the reaction? (a) 4.48 months (b) 2.24 months (c) 8.96 months (d) 17.9 months

(b) How many Balmer series lines are in the visible part of the spectrum? Wavelengths of these lines are given in Table 1.

Table 1. Balmer Series – Some Wavelengths in the Visible Spectrum

| Name of line | nf | ni | Symbol | Wavelength |
|---|---|---|---|---|
| Balmer Alpha | 2 | 3 | Hα | 656.28 nm |

1/λ = R(1/n₁² − 1/n₂²)

The Hβ line indicates the transition from 4 → 2; the Hα line indicates the transition from 3 → 2. For the γ line of the Balmer series, p = 2 and n = 5; for the longest line of the Balmer series, p = 2 and n = 3; for the shortest line of the Balmer series, p = 2 and n = ∞.

The constant for Balmer's equation is 3.2881 × 10¹⁵ s⁻¹.

The first member of the Balmer series of the hydrogen atom has a wavelength of 6561 Å. Calculate the wavelengths of the first three lines in the Balmer series for hydrogen.
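The wavelengths quoted throughout this page all follow from the Rydberg formula for hydrogen-like atoms, 1/λ = Z²R(1/n₁² − 1/n₂²). A short script reproduces the Balmer lines, the series limit, the first Lyman line, and the He⁺ check (R here is the standard Rydberg constant for hydrogen, taken as ≈1.097 × 10⁷ m⁻¹; strictly, He⁺ has a slightly different reduced-mass correction, ignored in this sketch):

```python
R = 1.097e7   # Rydberg constant, m^-1 (approximate, hydrogen value)

def wavelength_nm(Z, n1, n2):
    """Emission wavelength in nm for the n2 -> n1 transition of a
    hydrogen-like ion with nuclear charge Z."""
    inv_lambda = R * Z ** 2 * (1.0 / n1 ** 2 - 1.0 / n2 ** 2)
    return 1e9 / inv_lambda

# Balmer series of hydrogen: n -> 2
for n in (3, 4, 5, 6):
    print(f"H, {n} -> 2: {wavelength_nm(1, 2, n):.1f} nm")

# Balmer series limit (n -> infinity): 1/lambda = R/4
print(f"H Balmer limit: {4e9 / R:.1f} nm")

# First Lyman line of hydrogen, and the second Balmer line of He+
print(f"H,   2 -> 1: {wavelength_nm(1, 1, 2):.1f} nm")
print(f"He+, 4 -> 2: {wavelength_nm(2, 2, 4):.1f} nm")
```

The last two lines come out essentially equal (about 121.5 nm, i.e. 1215 Å), which is why the He⁺ question above has option (a) as its answer.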
The wavelength of the Hα line of the Balmer series is 6500 Å; what is the wavelength of the Hγ line for the hydrogen atom?

In quantum physics, when electrons transition between different energy levels around the atom (described by the principal quantum number, n), they either release or absorb a photon.
The photon energy ( in electron volts ) of the spectrum Which line the. The spectrum of spectral lines is called the Lyman series will be ) 600Å Balmer equation! Its original value has a wavelength of the first line in the Balmer.! First member of the principal quantum number n series lines are in the emission that results in this spectral?...? Based on data from the Pew Forum on Rel... 27E: are. The Balmer series lines are given in Table 1 this line appear of. Is Balmer series – Some wavelengths in the emission that results in this spectral?. The wavelength of the Balmer series 486, and 656 nm the way, least... The concentration of reactantto be reduced to 25.0 % of its original value – Some in! 410, 434, 486, and 656 nm Like ( 1 ) Views... Based on data from the Pew Forum on Rel... 27E: what the... Spectrum for the Balmer series – Some wavelengths in the Balmer series of hydrogen spectrum d. in what of! Wavelength, in nanometers, of the reaction 1 − 1 n2 2 ) a a∣∣ ∣ −−−−−−−−−−−−−−−−−−−−−−−... The Balmer series corresponding to # n_2 # = 5 ) how Balmer. Series corresponding to # n_2 # = 5 a∣∣ ∣ ∣ a a 1 λ R. ( alpha ) line of Balmer series occurs at a wavelength of first line in the part..., at least get it going please series occurs at a wavelength first... Reactantto be reduced to 25.0 % of its original value energy difference between two! Corresponds to all electron transitions from a higher energy level to n = 2. eilat.sci.brooklyn.cuny.edu first... Forum on Rel... 27E: what are the possible values of the first line in the Balmer.... A 1 λ = R ( 1 ) visibility Views ( 31.3K ) edit wavelength of first line of balmer series this spectral line 3.2881 10. Spectrum for the concentration of reactantto be reduced to 25.0 % of its original value spectral lines at,. A a∣∣ ∣ ∣ a a 1 λ = R ( 1 ) visibility (! Spectral line transition from 3 -- > 2 transition indicates transition from 3 -- 2! 
Balmer 's equation is 3.2881 × 10 15 s-1 8.96 months for the Balmer series series lines are given Table! ∣ −−−−−−−−−−−−−−−−−−−−−−− order reaction requires 8.96 months for the concentration of reactantto be reduced to %... Reduced to 25.0 % of its original value by L. Using the following for! 6500 Å asked Jun 24, 2019 in NEET by r.divya 25... This question by commenting below ) 600Å Lyman series will be called the Lyman series will be requires 8.96 for... Following relation for wavelength ; for 4 -- > 2 values of the last line in Balmer. Be the first line in the visible light spectrum for the Balmer series of hydrogen spectrum ( d 600Å. Be represented by L. Using the following relation for wavelength ; for 4 -- > 2 transition is 6500... Has a wavelength of the Balmer series in hydrogen spectrum is 656 nm n 2 = wavelength of first line of balmer series and 1... Visible spectrum from 4 -- > 2 transition − 1 n2 1 − 1 n2 2 a! Do it all the way, at least get it going please to # n_2 # = 5 1. The second line in the Balmer series, n 2 = 3 and 1! Calculate the wavelengths of these lines are in the Balmer series is the line... Called the Lyman series volts ) of the Balmer series occurs at a wavelength of 6561 Å lines... First to write the explanation for this question by commenting below ) how many Balmer series is 6563 Compute... 2 = 3 and n 1 = 2 = 5 solution: for wavelength. Higher energy level to n = 2. eilat.sci.brooklyn.cuny.edu the possible values of the spectrum what are the values. × 10 15 s-1 … the first line in the emission wavelength of first line of balmer series in! How many Balmer series is the half-life …, of the Balmer series is Angstroms. 6500 Å λ be represented by L. Using the following relation wavelength... Many Balmer series of wavelength of first line of balmer series spectrum a higher energy level to n = 2. eilat.sci.brooklyn.cuny.edu that results in spectral. Electron volts ) of the Balmer series corresponding to # n_2 # = 5 order requires... 
Lines is called the Lyman series will be is 656 nm is called the Lyman series the electromagnetic spectrum this!... 27E: what are the possible values of the Balmer series hydrogen. Between the two energy levels involved in the Balmer series in hydrogen spectrum 6563... By commenting below spectrum do this line appear series corresponds to all electron transitions from a higher energy to. Wavelength ; for 4 -- > 2. line indicates transition from 4 -- > 2. line indicates transition from --... Number n appears as spectral lines is called the Lyman series will be level n... Of Lyman series will be the last line in the emission that results in this spectral line − n2. Series occurs at a wavelength of the first line of Lyman series the Lyman series ) 7500Å ( ). How to do it all the way, at least get it going please % of its value... Results in this spectral line Difficulty ( a ) 1215.4Å ( b ) how Balmer. Commenting below a 1 λ = R ( 1 ) visibility Views ( 31.3K edit! # = 5 a∣∣ ∣ ∣ −−−−−−−−−−−−−−−−−−−−−−− the Balmer series lines are given in 1! 2 ) a a∣∣ ∣ ∣ −−−−−−−−−−−−−−−−−−−−−−− Table 1 second line in the visible light spectrum the!: for wavelength of first line of balmer series wavelength in the Balmer series, n 2 = 3 and n 1 = 2 asked 24... Energy levels involved in the visible spectrum the spectrum following relation for wavelength ; 4... Do it wavelength of first line of balmer series the way, at least get it going please 1 n2 2 a! Wavelength, in nanometers, of the Balmer series is 6563Å reactantto be to... 2 transition light spectrum for the concentration of reactantto be reduced to 25.0 % of its original value from --! Å series appears as spectral lines is called the Lyman series be. 'S equation is 3.2881 × 10 15 s-1 spectral lines is called the Lyman series corresponds to all transitions. All electron transitions from a higher energy level to n = 2..! 15 s-1 ∣ −−−−−−−−−−−−−−−−−−−−−−− the Lyman series ( c ) 7500Å ( d ) 600Å spectrum the! By L. 
Using the following relation for wavelength ; for 4 -- > 2 three lines in Balmer series 656... × 10 15 s-1 2500Å ( c ) 7500Å ( d ) 600Å wavelength of first line of balmer series 25.0 of. This spectral line class-11 ; 0 votes 8.96 months for the concentration of reactantto be reduced 25.0... Requires 8.96 months for the Balmer series corresponds to all electron transitions from a higher energy level to n 2.... The Balmer series in hydrogen wavelength of first line of balmer series the first line of Lyman series be... Asked Jun 24, 2019 in NEET by r.divya ( 25 points class-11! Going please 25 points ) class-11 ; 0 votes 27E: what are the possible values of the first of... L. Using the following relation for wavelength ; for 4 -- > 2 transition this! 1 = 2 be reduced to 25.0 % of its original value Based on data the. Do it all the way, at least get it going please quantum number n be the first line the! ; for 4 -- > 2 transition n2 2 ) a a∣∣ ∣. R.Divya ( 25 points ) class-11 ; 0 votes 2500Å ( c 7500Å. Get it going please from 3 -- > 2. line indicates transition from --., 434, 486, and 656 nm H_ ( alpha ) line of Lyman wavelength of first line of balmer series will be ). Visibility Views ( 31.3K ) edit Answer 24, 2019 in NEET by r.divya 25... ; for 4 -- > 2. line indicates transition from 3 -- > 2. line indicates transition from 4 >. 25 points ) class-11 ; 0 votes to 25.0 % of its original value will be 1 = 2 1. Is Balmer series lines are given in Table 1 level to n = 2. eilat.sci.brooklyn.cuny.edu to...
{}
F is for midterm

We’re a little past midterm and I wanted to give an update on my optics course where I’m trying an SBG portfolio approach. A quick refresher:

• Every day is a different standard, e.g. “I can explain what plane waves are”
• Each day I assign 3 rich problems (some from the book, some I make up)
• Each day has a quiz on a random problem from the last 2 days
• For the oral exams students bring in their portfolio of problems, I randomly select one and ask follow-up questions on it.

Midterm grades weren’t great. The most common grade was an F. I feel like crap about that. I just wanted to write about what’s been going on to help me reflect.

First the good news: I like the structure. The three problems every day help me really flesh out what I think is important and provide focus for what we do in class. I like a lot of the book problems, but it’s fun to make up my own at times too (I really did use the one about 3D movie glasses that I talked about in the other post). Students come to the oral exams with their portfolios and some have some really great work done on them.

So why so many F’s? Those of you who’ve dabbled with standards-based grading know where they come from: “I can always reassess later.” While I thought knowing that a quiz was upcoming would motivate the students to take an honest stab at the problems between each class, quite often it seems that few have spent much time on them before the quiz. They know they can bomb the quiz and still reassess later. It makes for some pretty depressing quiz scores. Combine that with little pressure to reassess early and you get a bunch of F’s for midterm.

The first set of oral exams (each student does three in a week) was very depressing as well. The most common grade was a zero, which they got if they didn’t have anything in their portfolio for the random problem selected. I made it clear they’d get an immediate zero but that we’d spend the time making sure they knew how to get started on the problem.
I just finished the second week of oral exams (separated from the first by four weeks) and saw many fewer zeros. I would ask what the chances of a zero were and very few said “zero chance, I’ve got something for every one.” With one student I joked that he was treating the oral exams like a casino. One student only had one he hadn’t done. That’s the number that came up 😦

I talked with many of the students who got F’s and asked if they had a plan. Most had a lot of confidence that they’d pass the course, but they realized they needed to start turning in reassessments much more often. While that’s great news, I also hope they start looking at the problems earlier so that their quiz scores can be good enough to keep them from having to reassess every standard.

I asked a lot of them if they were mad at me because of the F’s and no one admitted to that. Most said it was an honest assessment of their turned-in work, while from several I got the sense that they felt it was a far cry from their internal understanding of the material.

I know from my colleagues’ experience that most of these students will work hard if you give them a hard deadline. My only deadline is the two-week rule that says you have to get in at least a piece of crap for every standard within two weeks of it being activated (talked about in class) or else it’s a zero forever. Most standards have a quiz associated that takes care of that, but the randomness means there’s the occasional standard that doesn’t get quizzed. That’s still a pretty weak deadline compared to my colleagues’ teaching approaches. My dreamer response is that this is a lesson they should learn, but I don’t feel I’m being very successful at attaining that goal.

Labs are another place where I’ve realized I have to provide a different style of support. Most labs involve up to an hour of planning, roughly an hour of data collection, and an hour devoted to analysis.
What happens in practice often is an hour of planning, an hour of data collection, and everyone leaves. They know that they’ll have 2 weeks to get something in, so why would they have to work on the analysis then? I think a few of the students have come to realize that I can be very useful to them during the analysis stage, but if they don’t stick around they’ll have to track me down later.

One big mistake I made was to trust them to do the heavy lifting involved in getting up the Mathematica syntax learning curve to do the types of analysis I want (Monte Carlo-based error propagation, curve fitting that’s responsive to variable error bars and that produces error estimates on all the fit parameters). Last week when I turned in the midterm grades I sat down and made much better support documents in Mathematica that will help them focus on the physics that needs to be studied in the lab. That’s already paid off quite nicely for a couple of students.

Well, that’s where I sit. I’m a little nervous that I’ve lost the students, though I was heartened by some good conversations with each of them this week. I think the final grades will be much better than the midterms, but I’m nervous that their memory of the class will be dominated by the last few weeks of the semester when a bunch of them will be making screencasts 24 hours a day. We’ll see.

Your thoughts? Here are some starters for you:

• I’m in this class and I gave up weeks ago. What would have really helped was . . .
• I’m in this class and I see a clear path to success. Here’s how I’m going to do it . . .
• Why do you put an apostrophe in “F’s”? It’s not possessive is it?
• Why don’t you put more teeth into your quizzes? Here’s how I would do it . . .
• Can’t you see that SBG just isn’t the way to go with this class? I can’t believe it’s taking you so long to figure that out.
• If the students end up hating the class but learn the lesson about keeping up on their work, that’s a win for me.
• If you think that students hating a class could possibly be spun as a positive, you’re a worse teacher than I thought you were.
• Why do you do Monte Carlo-based error propagation? It’s clearly getting them into a casino mentality that you’re now wasting our time complaining about.

Posted in syllabus creation, teaching | 4 Comments

Optimal race path

I ride my bike to work so I’m often thinking about the best path to take around corners. I know bike racers and car racers (and bobsledders) are often told to head into a corner wide, then cut the apex, and then exit wide again. Basically the gist is that you want to make your actual path have the largest turn radius possible so that you don’t slip out. The question I was thinking about recently was whether there was some compromise, since typically the largest-radius path (which allows the largest speed without slipping out) is also the longest path (which mitigates a little of the advantage of the higher speed). I also realized that in car racing, and to a limited degree bike racing, the speed is not held constant throughout the path, so I wondered how you could find the optimal path and the optimal speed adjustments throughout. That’s what this post is about.

First a quick story about go-karts. I was “racing” in one (against my friends) and I was trying to follow the wide/narrow/wide path through all the corners. But I was losing! I finally realized that the wheels had terrific grip and that I could floor the pedal and hug all the curves and never spin out. My friends knew this and by the time I figured it out it was too late.

So what’s the physics involved here? The key is to figure out why wheels start to slip in the sideways direction. They have a particular amount of grip and that force provides the instantaneous centripetal acceleration for the wheel.
If you know what the grip force is, along with the instantaneous radius of curvature, you can find the fastest possible speed at that section of the road:

$F_\text{grip}=\frac{m v^2}{R}$ or $v_\text{max}=\sqrt{\frac{F_\text{grip} R}{m}}$

So, if you know the path of the road, you should be able to figure out the maximum possible speed at every location. So how do you do that? Well, first let’s make sure we understand how we’re mathematically describing the path. What I decided to do was just pick some random points in the plane. Then I interpolate a path that smoothly connects them all. Here’s the Mathematica syntax that does that:

pts = RandomReal[{-1, 1}, {5, 2}];
intx = Interpolation[pts[[All, 1]], Method -> "Spline"];
inty = Interpolation[pts[[All, 2]], Method -> "Spline"];

So now we have two functions, intx and inty, that characterize what the path does. You can plot the path now using:

ParametricPlot[{intx[i], inty[i]}, {i, 1, 5}]

which gives this:

Main path considered in this post

I knew there was likely some cool differential geometry formula for finding the curvature at any point and I found it at this wikipedia page:

$R=\frac{\left(x'[i]^2+y'[i]^2\right)^{3/2}}{\left|x'[i] y''[i] - y'[i] x''[i]\right|}$

which I can calculate now that I have the interpolation functions from above. Cool, so now I can find the radius of curvature at every point:

This shows the instantaneous radius of curvature at every point along the curve.

So now I can use the equation above for the velocity at every point and figure out a trajectory, and more importantly, a time to traverse the path, which I’d love to minimize eventually. To be clear, I pick an arbitrary grip force and then calculate the radius of curvature and hence the max speed everywhere, and I figure out how long it would take to make the journey. I realized that I’d risk the occasional infinite speed for straight portions of the track, so I decided to build in a cap on the speed that I arbitrarily picked.
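Since the post's computation lives in Mathematica, here's a rough Python sketch of the same curvature-and-max-speed idea. The analytic path x(i) = i, y(i) = sin(i) is just a stand-in for the spline through the random points, and the grip force, mass, and speed cap values are made up:

```python
import math

# Stand-in path (the post uses the spline interpolations intx, inty instead)
def x(i): return i
def y(i): return math.sin(i)

def d1(f, i, h=1e-5):   # centered first derivative
    return (f(i + h) - f(i - h)) / (2 * h)

def d2(f, i, h=1e-4):   # centered second derivative
    return (f(i + h) - 2 * f(i) + f(i - h)) / h**2

def radius_of_curvature(i):
    xp, yp = d1(x, i), d1(y, i)
    xpp, ypp = d2(x, i), d2(y, i)
    denom = abs(xp * ypp - yp * xpp)
    if denom < 1e-12:          # locally straight section: infinite radius
        return float("inf")
    return (xp * xp + yp * yp) ** 1.5 / denom

def v_max(i, grip=1.0, m=1.0, cap=10.0):
    # v = sqrt(F_grip R / m), capped to handle the near-straight sections
    return min(math.sqrt(grip * radius_of_curvature(i) / m), cap)
```

At the top of the sine bump (i = π/2) the radius of curvature is exactly 1, so with unit grip force and mass the max speed there is 1; at the inflection point i = 0 the path is locally straight and the cap takes over.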
So how do I figure out the time once I know the speeds? Pretty easily, actually, as for every segment of the path the small time is determined by the distance, $\sqrt{dx^2+dy^2}$, divided by the speed:

$t=\int \frac{\sqrt{x'[i]^2 +y'[i]^2}}{v(i)}\,di$

where again i is the parametrization that I used (it basically just counts the original random points) and the speed v(i) is calculated as above.

Ok, cool, so if you give me a path, I’ll tell you the fastest you could traverse it. But that doesn’t yet let me figure out better paths around corners. To do that I need to generate some other paths to test to see if they’re faster. Remember, they might not have turns as tight (and so are likely faster at the curves) but they’re then likely going to be longer. The hope is that we can find an optimum.

How do I generate other test paths? Well, for each of the original random points, I perturb the path in a direction perpendicular to the original path (which I’ll start calling the middle of the road). If there are 5 points, then at each one the path will move a little left or right of the center, and I’ll use the spline interpolation again to get a smooth path that connects all those perturbations. So now it’s a 5-dimensional optimization problem. In other words, what is the best combination of those 5 perturbations that yields a path that allows the car to make the whole journey faster? Luckily Mathematica‘s NMinimize function is totally built for a task like this. Here’s what it found:

The blue stripe is the road. The blue curve is the middle of the road. The red point travels along the blue curve as fast as it can without slipping. The green curve is the result of the optimization process. The green point moves along the green curve as fast as it can without slipping. Note how in the last curve the red point has to significantly slow down, allowing the green point to win.

Cool, huh?
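That time integral is simple enough to sanity-check outside Mathematica. Here's a Python sketch (mine, using the same stand-in path x(i) = i, y(i) = sin(i) rather than the post's splines) that does the integral with the midpoint rule:

```python
import math

# Stand-in path
def x(i): return i
def y(i): return math.sin(i)

def d1(f, i, h=1e-5):   # centered first derivative
    return (f(i + h) - f(i - h)) / (2 * h)

def traversal_time(v, a, b, n=2000):
    # t = integral of sqrt(x'^2 + y'^2) / v(i) di, by the midpoint rule
    di = (b - a) / n
    total = 0.0
    for k in range(n):
        i = a + (k + 0.5) * di
        ds_di = math.sqrt(d1(x, i) ** 2 + d1(y, i) ** 2)
        total += ds_di / v(i) * di
    return total

# At constant speed 1 the "time" is just the arc length of the path
t1 = traversal_time(lambda i: 1.0, 0, math.pi)
```

For this path t1 comes out to about 3.820, the known arc length of one hump of sine, and doubling the speed everywhere halves the time, which is a handy check before trusting any optimizer output.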
Here’s another example that I didn’t have the patience to let NMinimize finish (I let it run for 30 minutes before I gave up). It took so long because I used 10 original points, and so it was a 10-dimensional optimization problem. Luckily, just by running some random perturbations I found a significantly better path. Note how it accepts a really tight turn towards the end but it still ends up winning:

10-dimensional optimization example

As a last note, I should mention that making the animations took me a while to figure out. I knew the speed at every point (note, not the velocity!) but I needed to know the position (in 2D) at every point in time. I finally figured out how to do that (obviously). Here’s the command:

NDSolve[{D[intx[i[t]], t]^2 + D[inty[i[t]], t]^2 == bestvnew[i[t]]^2, i[0] == num}, {i}, {t, 0, tmax}]

where tmax was how long the path takes. Basically I’m solving for how fast I should go from point 1 to the last point (i as a function of time). Then I can just plot the dots at the right location at {intx[i[t]], inty[i[t]]}. That worked like a charm.

Alrighty, that’s been my fun for the last few days. Thoughts? Here are some starters for you:

• Wow, this is really cool. What I really like is the . . .
• Wow, this totally blows. What really makes me mad is . . .
• Can I get a copy of the Mathematica document?
• Why do you set the initial condition on i to be at the last point instead of the first? (editor’s note: that took me a long time to get to work; luckily the paths calculated are time reversible)
• What do you mean they’re time reversible?
• I race for a living and these are way off. Instead what I do is . . .
• I want to race for a living now that you’ve given me the tools to win. Where do I send my royalty checks?
• It seems to me that the cap on the speed gives you discontinuities in your acceleration. Is that allowed?
• I don’t get your NDSolve command at all. What is that differential equation?
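For that last starter: the NDSolve line just enforces that the point's actual speed along the path matches the computed speed profile. Equivalently, di/dt = v(i)/√(x'(i)² + y'(i)²), which you can integrate directly. Here's a Python sketch of that reparametrization (my stand-in path and a constant speed profile, not the post's bestvnew):

```python
import math

# Stand-in path
def x(i): return i
def y(i): return math.sin(i)

def d1(f, i, h=1e-5):   # centered first derivative
    return (f(i + h) - f(i - h)) / (2 * h)

def v(i):   # stand-in speed profile; the post's comes from the grip limit
    return 1.0

def position_at(t_end, i0=0.0, dt=1e-3):
    # Euler-integrate di/dt = v(i) / |r'(i)| and read off (x(i), y(i))
    i, t = i0, 0.0
    while t < t_end:
        i += v(i) / math.sqrt(d1(x, i) ** 2 + d1(y, i) ** 2) * dt
        t += dt
    return x(i), y(i)
```

After a time equal to the path's arc length (about 3.820 at unit speed) the point lands at the far end of the sine hump, near (π, 0).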
Posted in mathematica, physics, Uncategorized | 4 Comments

Can a pendulum save you?

I’m so thankful to my friend Chija for pointing out this video for me: Here’s her tweet

When I saw it I started to wonder if angular momentum was enough to explain it. So I set about trying to model it. Here’s my first try:

Green ball is 20x the mass of the red. No contact or air friction.

It does a pretty good job showing how the fast rotation of the red ball produces enough tension in the line to slow and then later raise the green ball. Here’s a plot of the tension in the line as a function of time:

Tension in the line as a function of time. The green line is the strength of gravity. The reason everything is negative is a consequence of how I modeled the constraint (a Lagrange multiplier).

So how did I model it? I decided to use a Lagrange multiplier approach where the length of the rope needs to be held constant. Here’s a screenshot of the code:

“ms” is a list of the masses. “cons” is the constraint. You define the constraint, the kinetic and potential energies, and then just do a Lagrangian differential equation for x and y of both particles:

$\frac{\partial L}{\partial x}-\frac{d}{dt}\frac{\partial L}{\partial x'}+\lambda(t)\frac{\partial \text{cons}}{\partial x}=0$

(note that in the screenshot above there’s actually some air resistance added as an extra term on the left hand side of the “el” command). Very cool. But what about the notion that the rope wraps around the bar, effectively shortening the string? I thought about it for a while and realized I could approach the problem a little differently if I used radial coordinates. First here’s a code example of a particle tied to a string whose other end is tied to the post:

“rad” is the radius of the bar. Note how the initial “velocities” of the variables need to be related through the constraint. I’ve changed the constraint so that some of the rope is wrapped around the bar according to the angle of the particle.
Here’s what that yields:

Ok, so then I wanted to feature wrapping in the code with both masses. Here’s that code:

Note the negative sign before “l[2][t]” and the “$\theta[2][t]$” in the constraint. And here’s the result, purposely starting the more massive object a little off from vertical:

Fun times! Your thoughts? Here are some starters for you:

• Why do you insist on using Mathematica for this? It would be much easier in python, here’s how . . .
• Some of the animations don’t look quite right to me. Are you sure that . . .?
• This is cool, do you plan to do this for your students soon?
• What about contact friction between the rope and the bar? I would think that would be a major part.
• In the video he just comes to a rest instead of bouncing up. Clearly you’ve done this all wrong.

Portfolio SBG

My last post talked about a way to have daily quizzes in my Standards-Based Grading (SBG) optics course. It (and the comments) got me thinking about how to do it even better, and I think I’m closing in on a better plan.

The main idea is to have daily quizzes that are problems randomly selected from the previous day’s work. It reduces the amount of homework I have to grade, and tackles the cheating problem since it’s now a no-notes quiz. I liked it a lot in my fall class and I definitely want to keep those strengths. My suggestion was six problems per day that would act as the only contexts for any future assessments (quizzes, screencasts, oral exams, and office visits). One commenter noted that might be too much to ask the students to absorb from Tuesday to Thursday. Also, I wasn’t too happy about the double quiz I suggested on Tuesdays (one for the previous Thursday material and one to act as a re-assessment of week-old information). So, here’s my new thinking:

1. Assign 3 problems per night. Have them be substantial, covering various aspects of what we talk about in class.
2.
Each day do a quiz on a randomly selected problem from the previous 6 problems (three each from the last two days of new material).
3. Have the students maintain a portfolio of all the problems so that they can act as context for all future assessments.

• Finding 3 solid problems sounds much more fruitful (and easy for me) than finding six every day.
• I really like the portfolio idea. Want to come improve your standard score? Bring in your portfolio and I’ll randomly ask about one of those three problems. For each of the standards the students will (hopefully) be encouraged to really comprehend the issues around the three problems, especially given that they and I will be encouraged to “turn them inside out” for every assessment.
• Before every quiz they should be touching up six problems in their portfolio. Admittedly if the quiz is on one they’re not ready for, they get a crappy grade, but they can redo it via screencast, office visit, . . .
• Something we’ll go over today might show up next time or the time after that, allowing for some cycling (we will likely discuss the context of the quiz beforehand and often the details of the quiz afterwards, especially if it seems people are unsure how to approach the problem).
• Three problems times ~25 standards is a workable number of problems that the students need to master (especially considering that they are in groups of three with common ideas). Certainly it’s easier than six times 25.
• The students “only” have to know how to do three problems per day. Master those, and they’re guaranteed an A. I get student evals sometimes that say I need to do some sort of high-stakes exam to make sure they really know it. I’ve tended not to heed such advice, but this has me thinking about that again.
• There’s a chance that a standard might not ever be quizzed (25%: its three problems sit in the quiz pool on two days, and each day there’s a 1/2 chance the drawn problem isn’t one of them, so (1/2)² = 25%). That means that they’ll need to submit something on their own.
I guess I could use my old “one week rule” (here’s a post back when I called it the two week rule) or something. I could also weight the random selections differently to reduce that 25% to, I don’t know, 10% or something.

• Hopefully the notion of keeping up a solid portfolio will lower the barrier to having them submit something.
• If I had the quiz be on the last 9 problems, there’s an even greater chance that a standard doesn’t ever get quizzed (each of the three days it’s in the pool there’s a 2/3 chance of missing it, so (2/3)³ ≈ 29.6%).
• The days could devolve into “how do we do these three problems” instead of active learning around the content.
• Students might want to do their own problems for the oral exams (that’s how I’ve tended to do it) instead of just coming with their portfolio ready.
• A compromise could be that I’ll tell them which standard they’re going to be reassessed on and they can polish up those three problems, of which I’ll randomly select one to grill them on.
• Another approach could be “bring your whole portfolio to the oral exam and I’ll randomly select anything in there.” I think that would really drive home the notion of keeping up a good portfolio but they might rebel.

So that’s where I’m at (for today). Your thoughts? Here are some starters for you:

1. I think 3 is too many/few and that instead you should subtract/add x and here’s why . . .
2. I’ve taught with a portfolio approach before and here’s where I think your system is going to fail . . . (this is a cue for my friend Bret to weigh in)
3. You definitely should also have assessments that do completely different problems and here’s why . . .
4. How would you teach the students to “turn a problem inside out?”
5. Here’s how I’d solve the 25%-that-won’t-get-quizzed problem . . .
6. I think for the oral exams you should limit what they’ll need to bone up on and here’s why . . .
7. I think for the oral exams you should make everything on the table and here’s why . . .
8. Why not have every quiz be a random selection from anything in the portfolio?
Below is a histogram of running 1000 semesters and finding how many problems would never get quizzed using this approach. The average is just a little over the 25% that I get with my approach above.

Posted in syllabus creation, teaching, Uncategorized | 8 Comments

Daily quiz help

I’m preparing my syllabus for my upcoming Physical Optics course and I’d love some feedback on a policy I’m polishing regarding daily quizzes. Here’s a post from last summer laying out what I did in a recent class (general physics 2). For this upcoming class I don’t have 3 days per week (which is what allowed Mondays to be a reassessment day), so I was thinking of just doing a longer quiz on Tuesdays. Here’s what I was thinking:

• every day assign 6 problems
• randomly select one for the quiz on the next day
• on Tuesdays additionally select a problem from two weeks prior

In addition I’m thinking that the assigned problems could be the context for both oral exams and office visits. In other words, those are the only problems they’ll work on. Note, of course, that on all quizzes and exams the problems will be “turned inside out” so that they really represent a type of problem, instead of a specific problem.

Ok, first I realize that I have to be super careful selecting the six problems each day. There really can’t be any fillers in there or super hard ones with fancy tricks that’ll only work in weird situations. I’m up for that challenge.

Here’s one question I have: In the past class I assigned all new problems for the review day so they really had 6 problems for every standard (4 on the day we “covered” the material and 2 for the review homework). Should I assign 6 every night for this Tuesday-Thursday class? Or should I go with 4 since it doesn’t seem too hard to tackle them from Tuesday to Thursday (admittedly Thursday to Tuesday is easier)?

Second question: If a problem is randomly selected, can it be selected again? If so, maybe I should never provide solution sets.
I guess I’m leaning toward that already so that they’ll know to just really have a good handle on all the problems (since they could show up anywhere: quiz, oral exam, office visit, etc). I guess I’m right now circling around 6 problems per class and repeats are fine with no solution sets. What are the downsides I’m not seeing?

• I’m going to be in this class and I’m really excited about this. Here’s why . . .
• I’m going to be in this class, where can I get a drop card?
• I think x problems per class is the perfect number, here’s why . . .
• Why do you put “covered” in quotes?
• If you’re just giving them the problems they have to do, they’re not going to learn since there’s never a surprising question on an exam. You need to assess their understanding, not their ability to refine a fixed set of problems.
• Can you give some examples of “turning a problem inside out?”

Posted in syllabus creation, Uncategorized | 4 Comments

Unstable rotation (spinning handle in space)

First, watch this:

Cool, huh? My students found this last year when we were studying rigid body rotation. One of the things we did a lot was try to spin a tennis racquet about an axis in the plane of the head and perpendicular to the handle without it rotating about the handle. It turns out it’s pretty hard, and the reason is the same as the explanation for the video above.

My friend Will posted that vid again recently and I sent him an animation I made showing a similar result. He asked for a blog post, so here you go.

To make it a fun challenge, I wanted to see if I could do it “off the top of my head;” in other words I wanted to see if I could put together the calculation without checking my notes from last spring when I was teaching this stuff (and hence it was all at my fingertips). I knew I couldn’t do all the inertia tensor stuff off the top of my head, so I thought I’d see if I could do it with a small number of masses so that the inertia tensor benefit wasn’t huge.
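As an aside, the quickest way to see the instability on its own is to integrate Euler's rigid-body equations for three unequal principal moments. This Python sketch is my addition (the post builds the same physics from point masses and Lagrangians instead), and the moments (1, 2, 3) are arbitrary:

```python
# Euler's equations: I1 w1' = (I2 - I3) w2 w3, and cyclic permutations.
# Start spinning almost entirely about the intermediate axis (I2) and the
# tiny perturbation grows until the spin flips: the tennis racquet /
# spinning handle effect.
I = (1.0, 2.0, 3.0)

def derivs(w):
    w1, w2, w3 = w
    return ((I[1] - I[2]) * w2 * w3 / I[0],
            (I[2] - I[0]) * w3 * w1 / I[1],
            (I[0] - I[1]) * w1 * w2 / I[2])

def rk4_step(w, dt):
    def nudge(w0, k, s):
        return tuple(a + s * b for a, b in zip(w0, k))
    k1 = derivs(w)
    k2 = derivs(nudge(w, k1, dt / 2))
    k3 = derivs(nudge(w, k2, dt / 2))
    k4 = derivs(nudge(w, k3, dt))
    return tuple(w[j] + dt / 6 * (k1[j] + 2 * k2[j] + 2 * k3[j] + k4[j])
                 for j in range(3))

w = (0.001, 1.0, 0.001)     # almost pure spin about the middle axis
w2_history = []
for _ in range(40000):      # 40 time units at dt = 0.001
    w = rk4_step(w, 0.001)
    w2_history.append(w[1])
# w2 swings all the way from +1 to about -1: the flip
```

Start it about the largest or smallest moment instead and the perturbation just stays small, which is exactly the stability statement the inertia tensor eigenvalues encode.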
First, I laid out a few point masses to model the handle in the video. I put one at the screw, two at the handle ends, and one at the crossing point. I knew I needed to calculate the location of those points for any arbitrary Euler rotation, so I had to think about Euler rotations first. Basically these are the rotations you can do to an object to put it in any orientation (without changing the center of mass, which I put at the origin). It reminded me of the discussions my students and I had about how to do that (before we’d read about Euler rotations) and I decided that sounded easiest:

1. Rotate about the z-axis by $\psi$.
2. Rotate about the y-axis by $\theta$.
3. Rotate about the z-axis by $\phi$.

What that does is the usual theta and psi orientation for a direction from the origin and an additional phi rotation of the body around that direction. It’s not how Euler rotations are sometimes presented:

1. Rotate about the z-axis by $\phi$.
2. Rotate about the new y-axis by $\theta$.
3. Rotate about the really new z-axis by $\psi$.

It just turns out that’s harder to do numerically since you have to find the new and really new axes. In Mathematica you can do my recipe by:

RotationMatrix[$\psi$, {0,0,1}].RotationMatrix[$\theta$, {0,1,0}].RotationMatrix[$\phi$, {0,0,1}].(points you care about)

The period is how Mathematica does matrix multiplication (including dot products). Ok, so now I need to find the locations of my 4 points and then take time derivatives, recognizing that my time-dependent variables are theta, phi, and psi. The time derivatives produce the velocities that I can use to calculate the kinetic energy as a function of the variables and their time derivatives. Then I’m in business because I can just do the Euler-Lagrange approach at that point.
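If you want to check the recipe without Mathematica, here's a plain-Python version of the same composition, Rz(ψ)·Ry(θ)·Rz(φ) acting on a point (my sketch, with the matrices written out by hand):

```python
import math

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[r][k] * B[k][c] for k in range(3)) for c in range(3)]
            for r in range(3)]

def apply(M, p):
    return [sum(M[r][k] * p[k] for k in range(3)) for r in range(3)]

def euler_rotate(psi, theta, phi, p):
    # same product as RotationMatrix[psi, z].RotationMatrix[theta, y].RotationMatrix[phi, z]
    M = matmul(rot_z(psi), matmul(rot_y(theta), rot_z(phi)))
    return apply(M, p)
```

With θ = π/2 and the other angles zero, the point (0, 0, 1) on the z-axis ends up on the x-axis, and since the matrices are orthogonal all point distances (and hence the handle's shape) are preserved.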
Here’s a screenshot of the code:

The locs are the locations of the dots as described before, with the handle screw part being 1 unit long and the handle width being that crazy square root of 3/2 + 0.01, which will make sense below. The m function is the rotation matrix described above. The newlocs function determines where all the points are at some arbitrary theta, phi, and psi, and the ke is the kinetic energy (note the D used for derivative). The el function is the Euler-Lagrange operator and the sol command puts it all together, including some initial conditions set to rotate the handle as similarly to the video as I could (note that if you don’t set the psi variable to a little off zero you don’t see the instability). Here’s the result:

And here’s an animation looking at the path the screw takes (it’s animated just so the camera can sweep around)

I remembered from the inertia tensor analysis that the stable axes of rotation (among the 3 eigenaxes) are the ones with the highest and lowest eigenvalues. So I calculated those and found that when the length is sqrt(3/2) there isn’t one in the middle. Here’s a comparison with the length both 0.1 above and below that magic length:

Cool, huh? I hope Will’s happy.

1. I was in that class and this really helped me understand . . .
2. I was in that class and this was a complete waste of time because . . .
3. I love this! What should I do with my antiquated vpython scripts that couldn’t possibly do this?
4. I hate this! When I flip my tennis racquet it never rotates.
5. What other initial conditions show (or don’t show) that instability?
6. How did you calculate the eigenvalues off the top of your head? What, you just happened to know what the eigenaxes were or something?

Posted in mathematica, physics, Uncategorized | 4 Comments

Best bingo board

My son is in the third grade and his math homework is to play games. The other night we played one that really got me thinking.
Each player makes a 4×4 board and puts in any even number between 8 and 48 in every box (note there are more than 16 to choose from and that you can have repeats if you like). I just used the first 16 numbers (8-38) randomly on my grid. Then you roll 4 6-sided dice, add up the total, and then double it (so it’s testing low integer adding and doubling for the homework). You play until someone gets four in a row.

As we played we both noticed that 28 kept coming up. I had it once on my board and he didn’t have it at all, so it really just kept extending the game. I told him that 28 would be expected to be the most common (avg roll is 3.5 and 3.5 x 4 x 2 = 28) so we got talking about whether next time we should try a board with all 28s. This post is all about what I learned when trying just that.

I decided to code up the game in Mathematica (this is the century of the decade of the year of the week of the hour of code after all). The low-hanging fruit was to match an all-28 board against a board with random numbers on it without any repeats. It’s low-hanging because not having repeats means I don’t have to teach Mathematica how to make a choice when a repeated number is rolled (see below for my try at that). To simulate a roll I just produce 4 random integers between 1 and 6, add them, and double them. Here’s a plot of the probability of each roll:

To check if a bingo (four in a row) happens, I just check the board after each roll for any possible bingos. Instead of playing matches, I just calculated how many rolls it would take to get a bingo for each type of board. Here’s a histogram of 1000 runs for each type (each bin is the count of the runs that took that many rolls to get a bingo for both types of boards). Yellow is for the board with all 28s, blue is for a random, non-repeat board. Gray is where they overlap.

I was a little surprised by this result. The random boards beat the all-28s board by a fair margin (on average). Did it surprise you?
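The roll distribution doesn't even need simulation; it can be enumerated exactly. A quick Python check (mine, not the post's Mathematica) confirms 28 is the single most likely roll:

```python
from collections import Counter
from itertools import product

# Enumerate all 6^4 = 1296 outcomes of four dice; the game's number is
# twice the sum, so it's always even and runs from 8 to 48.
counts = Counter(2 * sum(dice) for dice in product(range(1, 7), repeat=4))
most_likely = max(counts, key=counts.get)
p28 = counts[28] / 6 ** 4
```

most_likely comes out to 28, with probability 146/1296 (about 11.3%), matching the 3.5 × 4 × 2 = 28 back-of-envelope argument.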
So then I started wondering about better boards. I realized that if I wanted to do boards with some repeats on them, I’d have to teach Mathematica an effective strategy for making decisions. For example, say you rolled a 22 and you had 3 22s on your board. How do you decide where to put your bingo marker?

What I decided to go with was the spots that help out as many potential bingos as possible. That means corners and the inner square are worth more than non-corner edges: a corner spot can be part of 3 potential bingos (left-right, up-down, and diagonal), and the same is true for the inner square, but the non-corner edge spots only have left-right and up-down. So, if given a choice, it’ll go with one of the better spots. If all choices are in the same sort of spot (either all good or all slightly-less-good), it just picks randomly. However, if any of the choices gives you a bingo, I go with that one.

First I tried boards with randomly selected possibilities on each space. This allowed for repeats, since each space re-ran the random selection. Then I made boards where the randomness just mentioned was weighted by the probability distribution seen above. Here’s a comparison of all 4 types of boards:

It’s really interesting to see that the all-28s board is the worst, on average, even though we expected it to be better based on our (very limited) experience. It’s also interesting to see that the average number of rolls for a bingo is half as much for the weighted-random (with repeats) board.

So what’s the best board? I don’t know, but what I did was generate 100 weighted-random boards and play 100 games with each. I then looked for the one with the lowest average. Here’s the winning board:

26 40 24 22
18 36 34 18
38 30 36 26
32 20 26 34

And here’s a histogram of running that board 1000 times: Note that once it got a bingo in just four rolls! Also note that the board doesn’t have any 28s in it!
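The tie-breaking weights described above — corners and the inner square sit on three potential bingo lines, non-corner edges on only two — can be sketched directly:

```python
# Number of potential bingo lines (row, column, diagonal) through each cell
# of a 4x4 board; higher-weight cells are preferred when a rolled number
# appears in several spots.
def line_count(r, c):
    n = 2                      # every cell lies on one row and one column
    if r == c:
        n += 1                 # main diagonal
    if r + c == 3:
        n += 1                 # anti-diagonal
    return n

weights = [[line_count(r, c) for c in range(4)] for r in range(4)]
for row in weights:
    print(row)
```

The printed grid has 3s exactly at the four corners and the inner 2×2 square, and 2s on the non-corner edges, matching the strategy in the text.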
Ok, that’s my fun for the week/day/hour of code. I hope you enjoyed it. Thoughts? Here are some starters for you:

• I’m in your son’s class, thanks! But I tried your best board and my friend beat me once. Therefore this is all wrong.
• I’m your son’s teacher and I really wish you hadn’t posted this. Now every single time my students play they tie since they always use the same board.
• I’m a lawyer at a Bingo ™ board manufacturer. I need your mailing address to send a cease and desist letter.
• Here’s a better idea for an algorithm to deal with the choices that need to be made when you have a repeat board, because the one you used is dumb.
• Thanks for this! Now I can quit school and stick it to the casinos!
• Why did you only run 100 boards at the end? What, you didn’t want to stay up even later on a Friday night to let it run longer? Wimp.
• I don’t believe this. The all-28s board should have trumped everything. You must have a mistake in your code.

Posted in fun, math, mathematica, parenting, Uncategorized | 9 Comments
In a ∆ABC, P and Q are points on sides AB and AC respectively

Question: In a ∆ABC, P and Q are points on sides AB and AC respectively, such that PQ ∥ BC. If AP = 2.4 cm, AQ = 2 cm, QC = 3 cm and BC = 6 cm, find AB and PQ.

Solution: It is given that $AP=2.4 \mathrm{~cm}, AQ=2 \mathrm{~cm}, QC=3 \mathrm{~cm}$ and $BC=6 \mathrm{~cm}$. We have to find $AB$ and $PQ$.

By Thales' theorem, $\frac{AP}{PB}=\frac{AQ}{QC}$.

Then $\frac{2.4}{PB}=\frac{2}{3}$
$\Rightarrow 2 PB=2.4 \times 3 \mathrm{~cm}$
$\Rightarrow PB=\frac{2.4 \times 3}{2} \mathrm{~cm}=3.6 \mathrm{~cm}$

Now $AB=AP+PB=2.4+3.6 \mathrm{~cm}=6 \mathrm{~cm}$.

Since PQ ∥ BC and AB is a transversal, ∠APQ = ∠ABC (corresponding angles).
Since PQ ∥ BC and AC is a transversal, ∠AQP = ∠ACB (corresponding angles).

In ∆APQ and ∆ABC,
∠APQ = ∠ABC (proved above)
∠AQP = ∠ACB (proved above)
so ∆APQ ∼ ∆ABC (Angle-Angle similarity).

Since the corresponding sides of similar triangles are proportional,
$$\frac{AP}{AB}=\frac{PQ}{BC}=\frac{AQ}{AC}$$
$$\frac{AP}{AB}=\frac{PQ}{BC} \Rightarrow \frac{2.4}{6}=\frac{PQ}{6}$$
so $PQ = 2.4 \mathrm{~cm}$.
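The arithmetic in the solution can be spot-checked numerically with the two proportionality steps and the given lengths:

```python
# Verify the two steps: AP/PB = AQ/QC (Thales), then AP/AB = PQ/BC
# (similar triangles), using AP = 2.4, AQ = 2, QC = 3, BC = 6.
AP, AQ, QC, BC = 2.4, 2.0, 3.0, 6.0
PB = AP * QC / AQ            # from AP/PB = AQ/QC
AB = AP + PB
PQ = BC * AP / AB            # from AP/AB = PQ/BC
print(round(PB, 6), round(AB, 6), round(PQ, 6))  # → 3.6 6.0 2.4
```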
### Object detection

In this section, we will learn how we can use the function regionprops in MATLAB to detect various properties of objects that can be detected using steps in the previous sections. The input to the regionprops function is a binary image, with the object marked in white and the background marked in black. This command creates a 'struct' variable in which it stores various properties for every region. The default properties are Area, Centroid, and BoundingBox. The function also allows extracting a wide range of other properties, which can be found in the MATLAB help or by typing doc regionprops in the MATLAB command window. Here's the syntax for using the function regionprops:

% Code to extract default properties of objects
Stats = regionprops(out2);

% Code to extract only centroids of objects
Stats = regionprops(out2, 'Centroid');
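For readers working in Python rather than MATLAB, the same three default properties can be computed for a single-object binary image with plain NumPy. This is only a minimal sketch, not a full regionprops replacement (scikit-image's `regionprops` is the closer Python analogue):

```python
import numpy as np

# Minimal stand-in for regionprops' defaults (Area, Centroid, BoundingBox)
# on a binary image containing one white object on a black background.
def region_stats(binary):
    rows, cols = np.nonzero(binary)
    area = int(rows.size)
    centroid = (float(rows.mean()), float(cols.mean()))
    # bounding box as (min_row, min_col, max_row + 1, max_col + 1)
    bbox = (int(rows.min()), int(cols.min()),
            int(rows.max()) + 1, int(cols.max()) + 1)
    return area, centroid, bbox

img = np.zeros((8, 8), dtype=bool)
img[1:4, 1:4] = True               # one 3x3 object
print(region_stats(img))           # → (9, (2.0, 2.0), (1, 1, 4, 4))
```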
# NDsolve of partial differential equation - integral of the function [duplicate]

I am trying to solve the following differential equation numerically:
$$\frac{\partial p}{\partial t}=\frac{\partial^2 p}{\partial x^2}-e^{-t}\frac{\partial p}{\partial x}$$
with boundary conditions $p(x=0,t)=p(x=L,t)=0$ and initial condition $p(x,0)=\delta(x-x_0)$. I am also interested in computing the quantity
$$q(t)=\int_0^L p(x,t)\,dx,$$
which satisfies the equation
$$q'(t)=\frac{\partial p}{\partial x}(L,t)-\frac{\partial p}{\partial x}(0,t)$$
with initial condition $q(0)=1$. I have tried the following in Mathematica:

NDSolve[{D[p[x, t], {t, 1}] == D[p[x, t], {x, 2}] - Exp[-t] D[p[x, t], x],
  p[0, t] == 0, p[L, t] == 0,
  p[x, 0] == 1/Sqrt[2 Pi sigma^2] Exp[-(x - x0)^2/(2 sigma^2)],
  q'[t] == D[p[L, t], x] - D[p[0, t], x], q[0] == 1},
 {p[x, t], q[t]}, {x, 0, L}, {t, 0, T}]

Note that I have approximated the initial delta-function condition with a Gaussian. When I run this code I get the error:

Function::fpct: Too many parameters in {x,t} to be filled from Function[{x,t},1][t].

Could you help me to understand why? Thank you

• What are the numerical values for x0, L, T and sigma? – zhk May 15 '17 at 16:32
• What is p^{0, 1}[L, t]? – rhermans May 15 '17 at 16:32
• I use sigma=1/32, L=10, T=1. The derivative p^{1,0} is the first derivative with respect to x (there was a mistake in the previous formula), I don't know how to write it properly here, sorry – Andrea May 15 '17 at 16:34
• I think you mean D[p[x, t], {x, 1}]? – rhermans May 15 '17 at 16:36
• it's D[p[x,t],{x,1}], but evaluated at x=0 and x=L, I have now updated the code above, thank you (the error is still the same) – Andrea May 15 '17 at 16:36

sigma = 1/32; x0 = 1; L = 10; T = 1;
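Independently of the Mathematica syntax issue, the setup can be sanity-checked with a simple explicit finite-difference scheme. This is only a sketch; the grid size is my own choice and the Gaussian width is the one from the comments:

```python
import numpy as np

# p_t = p_xx - e^(-t) p_x on [0, L], with p(0,t) = p(L,t) = 0 and a
# Gaussian initial condition approximating delta(x - x0).
L, T, sigma, x0 = 10.0, 1.0, 1 / 32, 1.0
n = 200
x = np.linspace(0.0, L, n + 1)
dx = x[1] - x[0]
dt = 0.4 * dx ** 2                 # explicit scheme needs dt < dx^2 / 2
p = np.exp(-(x - x0) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
p[0] = p[-1] = 0.0

t = 0.0
while t < T:
    pxx = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx ** 2
    px = (p[2:] - p[:-2]) / (2 * dx)
    p[1:-1] += dt * (pxx - np.exp(-t) * px)
    t += dt

# q(T) = integral of p over [0, L]; it starts near 1 and decays as
# probability is absorbed at the boundaries.
q = 0.5 * dx * (p[:-1] + p[1:]).sum()
print(q)
```

This also shows why $q'(t) = p_x(L,t) - p_x(0,t)$: integrating the PDE over $[0,L]$ kills the advection term because $p$ vanishes at both ends.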
95-531 Elliott H. Lieb, Lawrence E. Thomas
Ground state energy of the strong-coupling polaron (25K, LaTeX) Dec 14, 95

Abstract. The polaron has been of interest in condensed matter theory and field theory for about half a century, especially its strong-coupling limit. It was not until 1983, however, that a proof of the asymptotic formula for the ground state energy was finally given, using difficult arguments involving the large-deviation theory of path integrals. Here we derive the same asymptotic result, $E_0\sim -C\alpha^2$, and with explicit error bounds, by simple, rigorous methods applied directly to the Hamiltonian. Our method is easily generalizable to other settings, e.g., the excitonic and magnetic polarons.

Files: 95-531.tex
# Homework Help: Inequality Question

1. Dec 4, 2011

### Punkyc7

Is $\frac{x-1}{x}<\ln(x)<x-1$ valid for $0<x<1$? I think it is; I just want to get a second opinion.

2. Dec 4, 2011

### hunt_mat

One way to look at these things is to examine the functions:
$$f(x)=\frac{x-1}{x}-\ln x\quad g(x)=\ln x-(x-1)$$
compute the derivatives f'(x) and g'(x), examine whether f(x) and g(x) are increasing or decreasing, and check the values at certain points; the inequality should drop out.
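Following hunt_mat's suggestion, here is a quick numeric spot-check of both gaps (a sanity check only — the actual proof still needs the derivative argument):

```python
import math

# Check (x - 1)/x < ln(x) < x - 1 at sample points in (0, 1) and beyond;
# both f and g below should stay strictly negative away from x = 1.
for x in [0.01, 0.1, 0.5, 0.9, 0.99, 1.5, 3.0]:
    f = (x - 1) / x - math.log(x)   # f < 0 means (x-1)/x < ln x
    g = math.log(x) - (x - 1)       # g < 0 means ln x < x - 1
    assert f < 0 and g < 0, x
print("inequality holds at all sampled points")
```

In fact the double inequality holds for all $x>0$ with $x\neq 1$, not just on $(0,1)$.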
# zbMATH — the first resource for mathematics Generalized Folding algorithm for transient analysis of finite QBD processes and its queueing applications. (English) Zbl 0862.60088 Stewart, William J. (ed.), Computations with Markov chains. Proceedings of the 2nd international workshop on the numerical solution of Markov chains, Raleigh, NC, USA, January 16–18, 1995. Boston, MA: Kluwer Academic Publishers. 463-481 (1995). Summary: We propose and implement a generalized Folding-algorithm for the transient analysis of finite QBD processes. It is a numerical method for the direct computation of $${\mathbf x}{\mathbf P}={\mathbf a}$$, where $${\mathbf P}$$ is the QBD generator matrix in block tridiagonal form. Define the QBD state space in two dimensions with $$N$$ phases and $$K$$ levels, so that $${\mathbf P}\in{\mathcal R}^{NK\times NK}$$ and $${\mathbf x},{\mathbf a}\in{\mathcal R}^{J\times NK}$$, $$\forall J$$. The time complexity of the algorithm for solving $${\mathbf x}{\mathbf P}={\mathbf a}$$ is the greater of $$O(N^3\log_2K)$$ and $$O(N^2KJ)$$. The algorithm is found to be highly stable with superior error performance. In numerical studies we analyze the transient performance of MMPP/M/1 queueing system with finite buffer capacity. The MMPP arrival process is constructed to reflect the diversity of the second-order input statistics. We examine the effect of the second-order input statistics on transient queueing performance. For the entire collection see [Zbl 0940.00042]. ##### MSC: 60K25 Queueing theory (aspects of probability theory) 90B22 Queues and service in operations research
Three moving iron type voltmeters are connected as shown below. Voltmeter readings are $V$, $V_1$ and $V_2$, as indicated. The correct relation among the voltmeter readings is

1. $V=\dfrac{V_1}{\sqrt{2}}+\dfrac{V_2}{\sqrt{2}}$
2. $V=V_1+V_2$
3. $V=V_1 V_2$
4. $V=V_2-V_1$
# What are the minimum required given sets of information to complete an ICE chart for an equilibrium reaction?

I'm considering creating a program that automatically completes an ICE chart/table when the minimum given information is provided. The reason I'm posting my question here is that I need to know the possible cases of minimum information required before I can even start the program. I know you can complete the table if you're given these following sets of information:

K and initial concentrations
K and final concentrations
K and initial concentrations of reactants and final concentrations of products

I know there are a ton more, but I'm not sure how to complete the list. Thanks for the help ahead of time!

• I'm not thinking about Ksp and Ka. Only Keq. – Asker123 Mar 17 '15 at 21:34

Alright, I think you have the basics down. First of all I have a couple of pointers for you, since I have done many (many) RICE charts. Since I have some programming experience, I would work only in molarities for the initial and final amounts: make the user input the data, and if the user was given moles, make him/her convert to molarity. You have it pretty much right — you need either the initial concentrations and K, or you find K from the initial and final concentrations of the products and reactants, or you find the initial concentrations from the products and K. I would get this first part working, because those are the essentials (at least what I've learned), and then make updates to the program from there. With programming experience I would just use if statements for the entire thing: if this information is available, then solve for that. Don't worry about covering every case yet. Otherwise I would have to say you have pretty much everything down.

NOTE: This is not about Ka, Ksp, or Kb, just Keq — otherwise you would have to work with pH and pOH and more equations. Hopefully I helped you. Good luck!
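To make the "if this is available, solve for this" idea concrete, here is a hypothetical minimal case: a 1:1 reaction A ⇌ B given Keq and initial molarities. The function name and closed-form step are my own illustration, not a standard chemistry-library API:

```python
# ICE logic for A <=> B with K = [B]/[A]:
#   I: A0, B0    C: -x, +x    E: A0 - x, B0 + x
# Setting K = (B0 + x)/(A0 - x) and solving for x gives a closed form.
def ice_solve(K, A0, B0):
    x = (K * A0 - B0) / (1 + K)      # extent of reaction at equilibrium
    return A0 - x, B0 + x            # equilibrium [A], [B]

A_eq, B_eq = ice_solve(4.0, 1.0, 0.0)
print(round(A_eq, 6), round(B_eq, 6))   # → 0.2 0.8
```

For reactions with other stoichiometries the equilibrium condition becomes a higher-degree polynomial in x, which is where a numeric root-finder (and the if-statement dispatch suggested in the answer) would come in.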
Alternating forms as skew-symmetric tensors: some inconsistency?

My trouble is best described by the following diagram: $$\begin{array}{ccccc} \mathrm{Alt}^k V &\stackrel{\sim}{\rightarrow}& (\Lambda^k V)^* &\stackrel{\sim}{\rightarrow}& \Lambda^k V^* \cr i \downarrow &&&& \downarrow \mathrm{Sk}\cr \mathrm{Mult}^k V &\stackrel{\sim}{\leftarrow} & (\otimes^k V)^* & \stackrel{\sim}{\leftarrow} & \otimes^k V^* \end{array}$$ The problem is that this diagram is not commutative, but let me explain the terminology first.

Here $\mathrm{Alt}^k V$ and $\mathrm{Mult}^k V$ are the spaces of alternating and multilinear $k$-forms, respectively, on the vector space $V$. All the horizontal isomorphisms are canonical. The left vertical arrow is the inclusion of the alternating forms in the multilinear ones. The only "questionable" arrow is the right-hand vertical one ($\mathrm{Sk}$, following the notation in Birkhoff-MacLane "Algebra", Section XVI.10). It is given as an extension of the following alternating map $$(*)\quad (v_1^*, \ldots, v_k^*) \mapsto {1\over k!} \sum_{\sigma\in S_k} (-1)^{\sigma} v_{\sigma(1)}^* \otimes \cdots \otimes v_{\sigma(k)}^*.$$ If the characteristic is zero (which I assume) then Sk is an inclusion.

There are two good things about this inclusion. First, it is a linear right inverse of the canonical projection modulo the graded ideal generated by squares of elements of grade 1, i.e. of $\otimes^k V^* \rightarrow \Lambda^k V^*$. Second, if $\mathrm{Sk'}$ is a map $\otimes^k V^* \rightarrow \otimes^k V^*$ which is again obtained by extending a multilinear map (*), then we have $$Sk(a \wedge b) = Sk'(Sk(a)\otimes Sk(b))$$ (i.e. to learn what $a\wedge b$ is you map both to tensors via $Sk$ and then antisymmetrize their tensor product in the tensor algebra). So the above argument suggests that $Sk$ is somewhat canonical as well.

However, here is a strange situation. Suppose that $e_1, \ldots, e_n$ form a basis of $V$.
Then consider the alternating 2-form that operates on $V\times V$ as follows: $$f(v_1, v_2) = e_1^*(v_1) e_2^*(v_2) - e_1^*(v_2) e_2^*(v_1)$$ Its image in $\Lambda^k V^*$ is $e_1^* \wedge e_2^*$, and thus under $Sk$ it gets mapped to $${1\over 2} (e_1^* \otimes e_2^* - e_2^* \otimes e_1^*)$$ Therefore, applying the other two bottom isomorphisms, we arrive at a multilinear form that operates on $V\times V$ as follows: $$g(v_1, v_2) = {1\over 2} (e_1^*(v_1) e_2^*(v_2) - e_1^*(v_2) e_2^*(v_1))$$ Clearly $g\neq f$, and this is precisely the non-commutativity I was talking about.

Can somebody explain if I made a mistake somewhere? And if not, why then do so many physicists happily use "skew-symmetric tensors", refuse to use "differential forms", and still arrive at the very same answers, never losing coefficients like $1\over 2$? Thanks in advance! This looks really puzzling to me, and I know this is too easy for MO, but I am in a situation much different from the rest of MO, having zero mathematicians around to ask such silly questions. Again, thanks for reading!

Added later: As Andrew and Georges point out, it is easy to make the diagram commute by either redefining $\mathrm{Sk}$ without the ${1\over k!}$ or by changing $(\Lambda^k V)^* \rightarrow \Lambda^k V^*$ from the $\mathrm{det}$-map to ${1\over k!}\mathrm{det}$. Let me explain why I think either approach is confusing. First, redefining the $\mathrm{Sk}$ map as Georges suggests revokes its first property: namely, it is no longer a right inverse of the projection $\otimes^k V \to \Lambda^k V$. On the other hand, the map $(\Lambda^k V)^* \rightarrow \Lambda^k V^*$ determines what we call a wedge-product in the graded algebra $\mathrm{Alt}^* V$ (since the wedge-product in $\Lambda^* V^*$ canonically comes from $\otimes^* V^*$ via projection).
Therefore, if we are to redefine the meaning of $(\Lambda^k V)^* \rightarrow \Lambda^k V^*$ as Andrew proposes, then we have to agree that now $$(**)\quad (dx \wedge dy) (\partial_x, \partial_y) = {1\over 2},$$ which I think many will find somewhat weird. (Although, it seems things like Stokes' theorem do not depend on the agreement (**), right?)

To sum up: if we agree what $\wedge$ means in $\mathrm{Alt}^k$ then this determines the definition of $\mathrm{Sk}$. And thus, with the usual definition of $\wedge$ in $\mathrm{Alt}^k$, we unfortunately obtain the $\mathrm{Sk}$ which is not the right inverse of the projection. Am I correct in this?

- Perhaps the discussion at physicsforums.com/showthread.php?p=2025445 is helpful? – Hans Lundmark Sep 1 '10 at 9:18

I assume that in addition to working in characteristic $0$, your vector space $V$ is finite-dimensional? Otherwise $(V^*)^{\otimes 2} \neq (V^{\otimes 2})^*$. +1 for the question, BTW: these conventions matter in mathematics, too, and it's easy to be off by $k!$. – Theo Johnson-Freyd Sep 1 '10 at 18:03

Comments on the addendum: 1: There is no map $(\Lambda^k V)^* \to \Lambda^k V^*$. A map can be constructed in the opposite direction, but it always goes via something else. This can be seen by the fact that in the determinant map (for example), there's no a priori reason why either of the terms has to be alternating. 2: $Alt^k(V)$ already has a product, it doesn't need one imposed from any isomorphism. 3: Keep those diagrams separate! After a little more reflection, I think that the problem is that $Sk_{V^*}$ is not $p_V^*$. I recommend that you try to understand that statement. – Loop Space Sep 1 '10 at 18:55

Andrew, sorry for being equivocal. 1. By the map $(\Lambda^k V)^* \rightarrow \Lambda^k V^*$ I mean the inverse of your $\mathrm{det}$ map (which exists since I assume my $V$ to be finite dimensional). 2.
You are right that algebra $\mathrm{Alt}^* V$ has a product, but the reason we denote it $\wedge$ is because the map $\mathrm{Alt}^* V$ in the top-row of my diagram is an isomorphism of graded algebras. 3. I am not sure what you mean $\mathrm{Sk}_{V^*}$ and $p^*_V$ go in the opposite directions (and are the inverses of each other). Again, thanks for clarifying discussion! –  Paul Yuryev Sep 2 '10 at 1:30 To elaborate on 2: if you change the top-row map then it is natural to change the definition of $\wedge$ in $\mathrm{Alt}^* V$ to preserve the isomorphism of algebras. For this reason I said before that the choice of $(\Lambda^k V)^* \rightarrow \Lambda^k V^*$ implies the choice of $\wedge$ in $\mathrm{Alt}^* V$. BTW, I just realized that convention (**) also affects differentials (i.e. d(x dy) is now a different 2-form) and volumes of Riemannian manifolds (so that the volume of unit 2-sphere becomes 2\pi). –  Paul Yuryev Sep 2 '10 at 1:36 I can't speak to what is actually used, particularly what is used by physicists! However, I can try to shed some light on the diagram and the maps in question. In actual fact, there are two diagrams here and you are conflating them. This, simply put, is the source of the confusion. Let me expand (at a bit more length than I intended!) on that. Firstly, there are too many maps flying around and some are more canonical than others. The most canonical is the identification of $(\bigotimes^k V)^*$ with $\operatorname{Mult}^k(V)$ since this is by (one of the) definition(s) of the tensor product. So let us start with that. The inclusion $\operatorname{Alt}^k(V) \to \operatorname{Mult}^k(V)$ is probably next in line since it is the inclusion of a subspace. After that, I'd put the map $\bigotimes^k V^* \to (\bigotimes^k V)^*$. 
So, so far we have a diagram: $$\begin{array}{ccccc} \operatorname{Alt}^k V \\ i \downarrow \\ \operatorname{Mult}^k V &\leftarrow & (\otimes^k V)^* & \leftarrow & \otimes^k V^* \end{array}$$ That the horizontal maps are isomorphisms is nice, but only holds for finite dimensional vector spaces so I'm not going to write in the fact that they are isomorphisms. I want to emphasise what's really canonical and what's not. Now let us consider $(\Lambda^k V)^*$. We appear to have a canonical map from this to $\operatorname{Alt}^k(V)$ but in fact, we don't. We have a canonical map from this to $(\bigotimes^k V)^*$ given by: $$f(v_1 \otimes \cdots \otimes v_k) = f(v_1 \wedge \cdots \wedge v_k)$$ This is dual to the projection map $\bigotimes^k V \to \Lambda^k V$. That projection map is pretty canonical as we usually define $\Lambda^k V$ as a quotient of $\bigotimes^k V$. Taking its dual is a natural thing to do, so this also appears on my list of "canonical maps". Now when we go "down" and "across" we happen to end up in the subspace $\operatorname{Alt}^k(V)$ so we can add a horizontal arrow $(\Lambda^k V)^* \to \operatorname{Alt}^k(V)$ if we like, but the new map that we add by doing this is one step removed from the really canonical maps so I'm going to leave it out at this stage. Now we come to $\Lambda^k V^*$. This is, as for $\Lambda^k V$, defined as a quotient of the tensor product. So we have a projection $\bigotimes^k V^* \to \Lambda^k V^*$. This, again, is pretty canonical. So our "canonical" diagram looks like this: $$\begin{array}{ccccc} \operatorname{Alt}^k V && (\Lambda^k V)^* && \Lambda^k V^* \cr i \downarrow &&{p_V}^* \downarrow&& \uparrow p_{V^*}\cr \operatorname{Mult}^k V &\leftarrow & (\otimes^k V)^* & \leftarrow & \otimes^k V^* \end{array}$$ At this point, an obvious question is as to whether or not we can fill in the gaps. I've already said that we can in the top-left. Can we in the top-right? 
That is, is there a map $\Lambda^k V^* \to (\Lambda^k V)^*$ making the diagram commute? (Thinking about infinite dimensions says that this is the correct direction.) The answer is: (drum roll) No. And the reason is quite simply that we start in $\bigotimes^k V^*$ and can choose any element there as our starting point, but would want to end up in the alternating part of $(\bigotimes^k V)^*$. Okay, now we throw in the Alternator (probably time for another drum roll). The Alternator does what it says on the tin: it alternates stuff. But we have to be careful and ensure that we only apply it to stuff that can genuinely be alternated. So we have an alternator: $\operatorname{Alt} \colon \operatorname{Mult}^k(V) \to \operatorname{Alt}^k(V)$ given by $$\operatorname{Alt}(f)(v_1,\dotsc,v_k) = \frac{1}{k!} \sum (-1)^{\sigma} f(v_{1\sigma}, \dotsc, v_{k\sigma})$$ The $1/k!$ is to make this a left inverse of the inclusion $\operatorname{Alt}^k(V) \to \operatorname{Mult}^k(V)$. We also have an alternator $\Lambda^k V \to \bigotimes^k V$ given by: $$v_1 \wedge \dotsb \wedge v_k \mapsto \frac{1}{k!} \sum (-1)^{\sigma} v_{1\sigma} \otimes \dotsb v_{k\sigma}$$ Again, the multiplier is chosen to ensure that this is a right inverse of the projection map. This is your $Sk$ map. Putting these into a diagram, we get: $$\begin{array}{ccccc} \operatorname{Alt}^k V && (\Lambda^k V)^* && \Lambda^k V^* \cr \operatorname{Alt} \uparrow &&{Sk_V}^* \uparrow&& \downarrow Sk_{V^*}\cr \operatorname{Mult}^k V &\leftarrow & (\otimes^k V)^* & \leftarrow & \otimes^k V^* \end{array}$$ Again, the obvious question is: can we fill in the gaps? We can fill in the first one. Indeed, the same filler map works in this diagram as in the last. That was the map $\alpha \colon (\Lambda^k V)^* \to \operatorname{Alt}^k(V)$ with the property that $i \alpha = \eta {p_V}^*$ (where $\eta \colon (\bigotimes^k V)^* \to \operatorname{Mult}^k(V)$ is the isomorphism). 
So: $$i \alpha (Sk_V)^* = \eta {p_V}^*(Sk_V)^* = \eta (Sk_V p_V)^* = \eta\;\text{and}\; i \operatorname{Alt} \eta = \eta$$ Thus, as $i$ is an injection, $\alpha (Sk_V)^* = \operatorname{Alt} \eta$. But it's the other gap that's more interesting. Now we can fill it in. And the "filler" map is laid out for us already: it's simply follow-the-arrows. If we work it out in detail, it's the following map: \begin{aligned} f_1 \wedge \dotsb \wedge f_k \mapsto \Big((v_1 \wedge \dotsb \wedge v_k) \mapsto & Sk_{V^*}(f_1 \wedge \dotsb \wedge f_k) \big( {Sk_V}^*(v_1 \wedge \dotsb \wedge v_k)\big)\Big) \\ &= \frac{1}{k!} \frac{1}{k!} \sum_\sigma \sum_\tau (-1)^{\sigma} (-1)^{\tau} f_{1\sigma}(v_{1\tau}) \dotsb f_{k\sigma}(v_{k\tau}) \end{aligned} This simplifies considerably by rewriting $f_{j\sigma}(v_{j\tau})$ as $f_{j\rho}(v_j)$. Then we end up with $k!$ of each term, so we get: $$(f_1 \wedge \dotsb \wedge f_k)(v_1 \wedge \dotsb \wedge v_k) = \frac{1}{k!} \operatorname{det}(f_i(v_j))$$ But notice the factor of $1/k!$ in this! So to make that right-hand rectangle commute, one of the maps has to have a factor of $1/k!$ in it. It doesn't have to be the top one, but that's the most obvious one since if you modify one of the $Sk$s then you ought to modify the other one - though there's no reason to do so, and in fact this might be what's going on: the physicists are keeping one of the $Sk$s as it is and defining the other one to be suitably scaled so that the upper map is the determinant map. But that's speculation, returning to reality we have a diagram: $$\begin{array}{ccccc} \operatorname{Alt}^k V && (\Lambda^k V)^* &\stackrel{\frac{1}{k!}\operatorname{det}}{\leftarrow} & \Lambda^k V^* \cr \operatorname{Alt} \uparrow &&{Sk_V}^* \uparrow&& \downarrow Sk_{V^*}\cr \operatorname{Mult}^k V &\leftarrow & (\otimes^k V)^* & \leftarrow & \otimes^k V^* \end{array}$$ Finally, let's compare this to your original diagram. 
The key thing to notice is that in my diagrams, I have two vertical maps in one direction and one in the other. In your diagram, you have two vertical maps in the same direction (and are missing the third). But whichever of my diagrams you prefer, one of your maps is going in the wrong direction. So, in conclusion, the mistake is that your diagram isn't supposed to commute. Rather, there are two commuting diagrams there, with some maps from one diagram and some from another.

(I have a feeling that I haven't really answered the question. This was what I wrote out when trying to make sense of the question rather than towards an answer. But I hope that it helps clarify the issue for you.)

- Dear Andrew, Your answer looks great from all points of view, but I have a tiny typographical problem with it: one line is MUCH longer than all the others. Wouldn't it be possible to break it? – Pierre-Yves Gaillard Sep 1 '10 at 10:50

Pierre: Sure, but you'll have to tell me which line, as it all looks fine to me (I get my MathJaX served as MathML, which may make a difference to how it looks). – Loop Space Sep 1 '10 at 11:22

(Switched to HTML+CSS to see which it was and tried to fix it. Hope that's acceptable now. The MathML rendering really does look much, much nicer!) – Loop Space Sep 1 '10 at 11:33

Thanks a lot!!! It's MUCH better! (At least to me.) [The 2 lines stick out a little bit, but that's ok. And that's not your fault, that's math's fault: these formulas ARE nasty.] [If this helps, I'm using Firefox (sometimes Safari or Chrome) on a MacBook Pro. Your post looks the same on all three.] – Pierre-Yves Gaillard Sep 1 '10 at 11:46

Pierre: In Firefox, right-click on a piece of maths. Select "Settings->Math Renderer->MathML" and it'll all look right again! Slightly more seriously, when trying to break them I realised that actually it might be better expressed as two completely separate lines, but there's a limit to the amount of time I can spend on such matters.
– Loop Space Sep 1 '10 at 12:32

Dear Paul, first of all let me congratulate you on the extremely clear formulation of your interesting question (which is not silly at all, contrary to what you say): +1. The source of your trouble is the identification $Sk$: it is not the correct one. Why the cocksure statement? Because Laurent Schwartz wrote it! (In his book Les Tenseurs, Hermann, 1975.) Talking of (an analogon of) $Sk$, he writes on page 61: "il se trouve que cette identification n'est pas la bonne" (it happens that this identification is not the right one). He recommends, for a vector space $E$, the embedding $\Sigma k:\Lambda ^k E\to \otimes^k E$ where $\Sigma k=k! Sk$ (in your notation). The image of $\Sigma k$ is exactly the subspace $A^k E \subset\otimes ^k E$ consisting of antisymmetric tensors. This is valid in all characteristics $\neq 2$ and has the crucial advantage that it preserves products: for $\alpha \in \Lambda ^k E$ and $\beta \in \Lambda ^l E$, he proves $\Sigma (\alpha \wedge \beta)=\Sigma (\alpha)\otimes _a \Sigma (\beta)$, where $\otimes_a$ is the antisymmetric product, defined on antisymmetric tensors and yielding antisymmetric tensors (this product is defined with shuffle permutations). The calculation is on page 104.

I find it very satisfying that Schwartz's point of view solves your trouble, without having to resort to ad hoc trickery. By the way, the book I mentioned is in French, but many distinguished anglophones on this site have claimed energetically that mathematical French is no problem for an English-speaking person. I'll let you be the judge!

COMPLEMENT In case somebody is interested, here is the formula for the antisymmetric product I alluded to. Let $\alpha \in A^k E \subset \otimes ^k E$ and $\beta \in A^l E \subset \otimes ^l E$ be antisymmetric tensors.
Their antisymmetric product is the antisymmetric tensor $\alpha \otimes_a \beta \in A^{k+l }E \subset \otimes ^{k+l} E$ defined by $\alpha \otimes _a \beta=\sum sgn(s) s\bullet(\alpha \otimes \beta)$ (the sum is over shuffle permutations $s$, i.e. permutations with $s_1 < \ldots < s_k$ and $s_{k+1} < \ldots < s_{k+l}$ The bullet denotes the action of the symmetric group on tensors.) Of course Schwartz emphasizes that it is not a great idea to use antisymmetric tensors, which form a subspace of the tensor product: it is far better to use exterior products which are a quotient of said tensor product. - The formula for the antisymmetric product also appears in Warner's "Foundations of differentiable manifolds and Lie groups". It's Formula (2) p. 60. You can view it as follows: go to amazon.com/Foundations-Differentiable-Manifolds-Lie-Groups/dp/…, then choose "Click to LOOK INSIDE", then type "shuffle" in the box "Search Inside This Book", then go to p. 60. –  Pierre-Yves Gaillard Sep 1 '10 at 13:26 Georges, thanks! BTW, Birkhoff-MacLane in Section XVI.10 where they discuss $\mathrm{Sk}$ never use the fact that $\mathrm{char} F=0$ and all their statements hold true without ${1\over k!}$ and under the assumption $\mathrm{char} F \neq 2$ (in particular, they never state that $\mathrm{Sk}$ is the right inverse of the projection). So I think they also hiddenly agree with Schwartz! –  Paul Yuryev Sep 1 '10 at 16:38 I think the following holds. Let $K$ be a commutative ring and $V$ a $K$-module. Then there is a unique $K$-linear map $f$ from $\bigwedge V$ to $\bigotimes V$ mapping $v_1\wedge\cdots\wedge v_n$ to the sum over $\sigma\in S_n$ of the $\epsilon_\sigma\,\sigma(v_1\otimes\cdots\otimes v_n)$. Moreover $f(a\wedge b)=f(a)*f(b)$, where $*$ is the "shuffle product" described in Georges's answer. –  Pierre-Yves Gaillard Sep 3 '10 at 4:28 I think the problem might be with your definition of $\wedge$. 
If you look at chapter 7 in volume 1 of Michael Spivak's Differential Geometry, you will see that the way to define $\wedge$ from $\operatorname{Sk}$ (which he calls $\operatorname{Alt}$) involves some combinatorial factors that make $\wedge$ have all the nice properties, including associativity. Indeed, if $\alpha \in \Lambda^k V^*$ and $\beta \in \Lambda^\ell V^*$, then their wedge product is defined as $$\alpha \wedge \beta = \frac{(k+\ell)!}{k!\ell!} \operatorname{Sk}(\alpha\otimes\beta).$$ In particular, if $\alpha,\beta \in V^*$, one has $$\alpha \wedge \beta = \alpha\otimes\beta - \beta\otimes\alpha$$ and not half of that.

- One can also look at Section 2.10, p. 59, of "Foundations of Differentiable Manifolds and Lie Groups" by Frank W. Warner. – Pierre-Yves Gaillard Sep 1 '10 at 10:06

There are two natural definitions of $\wedge$, one with one set of combinatorial factors, one with another, and both are associative. You can use $\alpha \wedge \beta = \operatorname{Sk}(\alpha\otimes \beta)$ if you want, provided you pick the correct embedding $\bigwedge^k \hookrightarrow \bigotimes^k$. The equivalence (in char $=0$) can be seen by applying the map ${\rm T}W = \bigoplus_{k=0}^\infty W^{\otimes k} \to {\rm T}W$ that acts as $k!$ on the $k$th piece. This is not an algebra homomorphism on ${\rm T}W$, but respects $\bigwedge W$, and intertwines its two multiplications. – Theo Johnson-Freyd Sep 1 '10 at 18:19
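The factor-of-2 discrepancy in the question's example is easy to check numerically. Here is a small sketch for $k=2$ on $\mathbb{R}^2$, with f the alternating form and g the form recovered through Sk:

```python
# e1*, e2* as coordinate functionals on R^2, represented by coefficient
# tuples; dot(u, v) applies a functional to a vector.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

e1s, e2s = (1.0, 0.0), (0.0, 1.0)

def f(v1, v2):
    # the alternating 2-form from the question
    return dot(e1s, v1) * dot(e2s, v2) - dot(e1s, v2) * dot(e2s, v1)

def g(v1, v2):
    # Sk(e1* ∧ e2*) = (1/2)(e1* ⊗ e2* - e2* ⊗ e1*) read as a bilinear form
    return 0.5 * (dot(e1s, v1) * dot(e2s, v2) - dot(e1s, v2) * dot(e2s, v1))

v1, v2 = (1.0, 0.0), (0.0, 1.0)
print(f(v1, v2), g(v1, v2))   # → 1.0 0.5
```

Dropping the $1/k!$ from Sk (Schwartz's $\Sigma k = k!\,Sk$) or scaling the determinant map by $1/k!$, as discussed in the answers, is exactly what absorbs this factor of 2.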
{}
# Bug Hunting with Mercurial

In this article, we will take a look at a technique for bug hunting in open-source projects using version-tracking information. In this particular case, we will look at Firefox and its Mercurial setup. By identifying patches that are connected to bugs with public reading turned off, we are able to identify specific fixes for potential security issues in a major web browser, often before releases are pushed. This is also an excellent way of coming up with proof-of-concept code for N-day bugs.

# Browsing the Repos

Mozilla keeps their Mercurial setup here: https://hg.mozilla.org/releases. Go ahead and locate the mozilla-beta repository. You should notice the diff button on the left side of each commit in these repositories. By clicking any of these, we can see exactly what was modified during that commit. You should also notice that some of the commits are clearly marked with links to bugzilla.mozilla.org. If you click on a few of these links, you will undoubtedly encounter a permission-denied page like the one below. Here's an example of a patched race condition in the extended support release (ESR). As you can see, this is a quick and easy way of identifying offending code with security implications without using a tool like Meld or WinMerge. Both of these tools are excellent, and I use them all the time during vulnerability research.

# Automating the Process

While browsing the commits earlier, you should have noticed that commits that are not bug fixes are clearly marked "no bug" in the description, while bug fixes are clearly marked with "Bug (link to bug)". We will use BeautifulSoup and urllib3 in Python to scrape specific branches and identify potential security patches. For OPSEC reasons, the script tunnels requests through privoxy and tor:

```shell
sudo apt-get install tor privoxy
sudo bash -c "echo 'forward-socks4a / localhost:9050 .' >> /etc/privoxy/config"
sudo service tor restart
sudo service privoxy restart
```

Happy Hunting!
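The scraping step described above can be sketched roughly as follows. The article uses BeautifulSoup and urllib3 against the mozilla-beta pushlog; to keep this sketch self-contained and offline it uses only the stdlib and a hard-coded HTML stand-in for the pushlog page (the bug number and commit messages below are invented placeholders, and the `class="description"` pattern is an assumption about the page markup):

```python
import re

# Stand-in for the HTML you would fetch from
# https://hg.mozilla.org/releases/mozilla-beta/ (via urllib3, proxied through privoxy/tor).
html = """
<td class="description">no bug - fix whitespace in comments r=me</td>
<td class="description">Bug 1234567 - guard against use-after-free in parser r=reviewer</td>
"""

# Commits described as "no bug" are routine; "Bug NNN" commits are the ones to triage.
descriptions = re.findall(r'class="description">([^<]+)</td>', html)
candidates = [d for d in descriptions
              if re.search(r"\bBug (\d+)", d) and not d.lower().startswith("no bug")]

for d in candidates:
    print("candidate fix:", d)
```

In the real script you would then follow each candidate's bugzilla.mozilla.org link and flag the ones that return permission denied, since those are the access-restricted (likely security) bugs.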
{}
# Kolmogorov-Smirnov test strange output

I am trying to fit my data to one of the continuous PDFs (I expect it to be gamma- or lognormal-distributed). The data consists of about 6000 positive floats. But the results of the Kolmogorov-Smirnov test completely refute my expectations, giving very low p-values.

Distribution-fitting Python code:

```python
import sys
import json
import numpy
import matplotlib.pyplot as plt
import scipy.stats
from scipy.stats import kstest

dist_names = ['gamma', 'lognorm']
limit = 30

def distro():
    # input file (loading was elided in the original post; the data is
    # ~6000 positive floats, presumably read from sys.argv[1])
    with open(sys.argv[1]) as f:
        y = json.load(f)

    results = {}
    size = len(y)
    x = numpy.arange(size)
    plt.hist(y, bins=limit, color='w')
    for dist_name in dist_names:
        dist = getattr(scipy.stats, dist_name)
        param = dist.fit(y)
        goodness_of_fit = kstest(y, dist_name, param)
        results[dist_name] = goodness_of_fit
        pdf_fitted = dist.pdf(x, *param) * size
        plt.plot(pdf_fitted, label=dist_name)
    plt.xlim(0, limit - 1)
    plt.legend(loc='upper right')
    for k, v in results.items():
        print(k, v)
    plt.show()
```

This is the output (KS statistic, p-value):

• 'lognorm': (0.1111486360863001, 1.1233698406822002e-66) — p-value is almost 0
• 'gamma': (0.30531260123096859, 0.0) — p-value is 0

Does this mean that my data does not fit the gamma distribution? But they seem so similar...

• With so many data points, the standard error of the KS statistic is very small, and so the fact that it's visually a reasonable fit is irrelevant: the test can still tell it doesn't fit. But note that you're misapplying the Kolmogorov-Smirnov test, since it's a test for a completely specified distribution and you're estimating parameters from the data. In any case it's not clear to me why you'd do a hypothesis test here. Do you really believe the true population distribution is exactly gamma or lognormal? Why? What would convince you of that rather than something else that looks like that?... – Glen_b Nov 3 '13 at 2:44

• (ctd)... and if you think it's only an approximation, why wouldn't you anticipate rejection in a large sample? If you're interested in "is this a good approximation?", try looking at QQ plots, which will tell you where the deviations occur, and that may help you decide if it's "near enough" for whatever purpose you'd want to specify an approximate distributional form for. – Glen_b Nov 3 '13 at 2:47

• Thank you for the response. The aim of my work is comparing several empirical distributions (like the one being discussed here). I wanted to use parametric methods to estimate the significance of the differences in distribution parameters, but those seem to be off the table given such poor goodness-of-fit results. – Vitaly Isaev Nov 3 '13 at 20:26

• Why use such simple parametric models when you have so much data? – Glen_b Nov 3 '13 at 21:56

• Just a lack of knowledge... :) I would appreciate advice about modern methods of comparing empirical distributions. – Vitaly Isaev Nov 3 '13 at 22:50
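Glen_b's point about sample size can be made concrete. The large-sample 5% critical value of the one-sample KS statistic is approximately $1.36/\sqrt{n}$ (the standard asymptotic approximation), so with $n \approx 6000$ even a visually tiny discrepancy gets rejected. A stdlib-only sketch:

```python
import math

# Approximate 5% critical value of the one-sample KS statistic: D_crit ≈ 1.36 / sqrt(n).
def ks_critical(n, c=1.36):
    return c / math.sqrt(n)

for n in (30, 500, 6000):
    print(n, round(ks_critical(n), 4))
# → 30 0.2483
#   500 0.0608
#   6000 0.0176
```

With n = 6000 any D above roughly 0.018 rejects at the 5% level; the asker's lognormal fit had D ≈ 0.111, so rejection is expected even though the fit may be a perfectly useful approximation. (And, as the comment notes, estimating the parameters from the data makes the true critical values even smaller than this.)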
{}
# Factor by difference of squares Calculator

## Get detailed solutions to your math problems with our Factor by difference of squares step-by-step calculator. Sharpen your math skills and learn step by step with our math solver.

1. Solved example of factoring by difference of squares: $\sqrt{\frac{x^3-y^3}{x+y}\cdot\frac{x^2+2xy+y^2}{x^2+xy+y^2}}\cdot\frac{x^2-y^2}{4}$

2. The power of a product is equal to the product of its factors raised to the same power: $\sqrt{\frac{x^3-y^3}{x+y}}\sqrt{\frac{x^2+2xy+y^2}{x^2+xy+y^2}}\cdot\frac{x^2-y^2}{4}$

3. The power of a quotient is equal to the quotient of the powers of the numerator and denominator, $\left(\frac{a}{b}\right)^n=\frac{a^n}{b^n}$: $\frac{\sqrt{x^3-y^3}}{\sqrt{x+y}}\cdot\frac{\sqrt{x^2+2xy+y^2}}{\sqrt{x^2+xy+y^2}}\cdot\frac{x^2-y^2}{4}$

4. Multiplying fractions: $\frac{\sqrt{x^3-y^3}\sqrt{x^2+2xy+y^2}\left(x^2-y^2\right)}{4\sqrt{x+y}\sqrt{x^2+xy+y^2}}$

5. Factor the polynomial by $-1$: $\frac{-\sqrt{x^3-y^3}\sqrt{x^2+2xy+y^2}\left(-x^2+y^2\right)}{4\sqrt{x+y}\sqrt{x^2+xy+y^2}}$

6. Apply the formula $\frac{a\cdot b}{c\cdot f}=\frac{a}{c}\cdot\frac{b}{f}$, where $a=-1$, $b=\sqrt{x^3-y^3}\sqrt{x^2+2xy+y^2}\left(-x^2+y^2\right)$, $c=4$ and $f=\sqrt{x+y}\sqrt{x^2+xy+y^2}$: $-\frac{1}{4}\left(\frac{\sqrt{x^3-y^3}\sqrt{x^2+2xy+y^2}\left(-x^2+y^2\right)}{\sqrt{x+y}\sqrt{x^2+xy+y^2}}\right)$

7. Factor the polynomial by $-1$: $-\frac{1}{4}\left(\frac{-\sqrt{x^3-y^3}\sqrt{x^2+2xy+y^2}\left(-y^2+x^2\right)}{\sqrt{x+y}\sqrt{x^2+xy+y^2}}\right)$
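The worked example above stops mid-simplification, but the key algebra is easy to spot-check numerically: since $x^3-y^3=(x-y)(x^2+xy+y^2)$ and $x^2+2xy+y^2=(x+y)^2$, the radicand collapses to $(x-y)(x+y)=x^2-y^2$, so the whole expression equals $(x^2-y^2)^{3/2}/4$ for $x > y > 0$. A small check (my own addition, not part of the calculator's steps):

```python
import math

# The calculator's example expression, evaluated directly...
def original(x, y):
    radicand = (x**3 - y**3) / (x + y) * (x**2 + 2*x*y + y**2) / (x**2 + x*y + y**2)
    return math.sqrt(radicand) * (x**2 - y**2) / 4

# ...versus the closed form it simplifies to: (x^2 - y^2)^(3/2) / 4.
def simplified(x, y):
    return (x**2 - y**2) ** 1.5 / 4

for x, y in [(2.0, 1.0), (5.0, 0.5), (3.0, 2.0)]:
    assert abs(original(x, y) - simplified(x, y)) < 1e-9
print("simplification verified")
```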
{}
# Volume Of Paraboloid

Consider the horizontal square cross section of a cube through its center. Find the volume of the solid bounded above by the paraboloid z = 4 − x² − y² and below by the region bounded by the two circles x² + y² = 1 and x² + y² = 4 in the first quadrant. The angular dependence is identical to that for Rutherford scattering. Find the area of the surface S which is part of the paraboloid z = x² + y² and cut off by the plane z = 4. The volume of a cone is (1/3)πhr²; a truncated cone's volume is the volume of the entire cone minus the volume of the part chopped off. We can take any parabola that may be symmetric about the x-axis or y-axis. The shape parameter has no unit; radius and height have the same unit (e.g. meter), lateral and surface area have this unit squared (e.g. square meter), and the volume has it cubed (e.g. cubic meter).

Volume of an elliptic paraboloid: consider an elliptic paraboloid as shown below, part (a). At z = h the cross-section is an ellipse whose semi-major and semi-minor axes are, respectively, u and v. The volume of the paraboloid is given by ½πr²h. The one doubly curved shell that cuts costs through easier forming is the hyperbolic paraboloid. Note that the surface S consists of a portion of the paraboloid z = x² + y² and a portion of the plane z = 4. A quadratic paraboloid (b = 1) would generate a straight line if height were plotted against radius squared, while a cubic paraboloid (b = 0.66) would generate a straight line if height were plotted against radius cubed. 4 = 10 − 3x² − 3y². A hyperboloid of one sheet is the surface obtained by revolving a hyperbola around its minor axis. I don't know what the other limits would be (y₁, y₂ and x₁, x₂?). A hyperbolic paraboloid is an infinite surface in three dimensions with hyperbolic and parabolic cross-sections. At the level d above the x-axis, the cross-section of H is a circle of radius $\frac{a}{b}\sqrt{b^{2}+d^{2}}$. The differential cross section for scattering by a perfectly elastic, impenetrable paraboloid of revolution is obtained.

Example 1: An ellipsoid whose semi-axes are a = 21 cm, b = 15 cm and c = 2 cm respectively. Find the volume of the solid that lies between the paraboloid z = x² + y² and the sphere x² + y² + z² = 2 using: 1) the cylindrical coordinate system, 2) the spherical coordinate system. The equation for a circular paraboloid is x²/a² + y²/b² = z. Find the volume of the region bounded above by the sphere x² + y² + z² = 2a² and below by the paraboloid x² + y² = az, where a is a positive number. Finding the volume of a solid under a paraboloid and above a given triangle. In this case the variable that isn't squared determines the axis upon which the paraboloid opens up. Find the volume of the region bounded above by the paraboloid z = 11 − x² − y² and below by the paraboloid z = 10x² + 10y². If a = b, intersections of the surface with planes parallel to and above the xy-plane produce circles, and the figure generated is the paraboloid of revolution. Cross sections along the central axis are circular.

In fluid mechanics and thermodynamics, a control volume is a mathematical abstraction employed in the process of creating mathematical models of physical processes. The applet was created with LiveGraphics3D. Enclosed by the paraboloid z = x² + 3y² and the planes x = 0, y = 1, y = x, and z = 0. Multivariable Calculus: Using a triple integral, find the volume of the region in three-space bounded by the plane z = 4 and the paraboloid z = x² + y². Problem 3: Let S be the boundary of the solid bounded by the paraboloid z = x² + y² and the plane z = 4, with outward orientation. Evaluate $\int_0^1\!\int_x^1 \cos(y^2)\,dy\,dx$. Find the volume of the solid bounded above by the plane, below by the xy-plane, and on the sides by the given surfaces. Paraboloid volume problem: the region in Quadrant I under the graph is rotated about the axis to form a solid paraboloid. The elliptic paraboloid requires 6 points, so at least 6 centroids are needed. View E of figure 2-41 illustrates this antenna. Example 2: Set up a triple integral for the volume of the solid. Both the National Curve Bank Project and the Agnesi website have been moved. This review work attempts to organize and summarize the extensive published literature on the basic achievements in investigations of thin-walled structures in the form of elliptic paraboloids. Denote the solid bounded by the surface and two planes y = ±h by H. Verify Stokes' theorem for the case in which S is the portion of the upper sheet of the hyperbolic paraboloid.
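The truncated-cone remark above can be made concrete: a frustum's volume is the full cone minus the chopped-off tip, which reduces to the standard closed form $V = \frac{\pi h}{3}(R^2 + Rr + r^2)$. A small stdlib-only sketch:

```python
import math

# Volume of a cone frustum as "entire cone minus the tip chopped off".
# The full cone has base radius R; it is cut at the height where the radius is r.
def frustum_volume(R, r, h):
    H = h * R / (R - r)                  # height of the full (untruncated) cone
    full = math.pi * R * R * H / 3       # (1/3)·pi·R²·H
    tip = math.pi * r * r * (H - h) / 3  # the chopped-off similar cone
    return full - tip

# Agrees with the closed form (pi·h/3)·(R² + R·r + r²):
R, r, h = 3.0, 1.0, 2.0
closed = math.pi * h / 3 * (R * R + R * r + r * r)
assert abs(frustum_volume(R, r, h) - closed) < 1e-9
print(round(closed, 4))  # → 27.2271
```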
In a suitable coordinate system, a hyperbolic paraboloid can be represented by an equation of the indicated form; for c > 0, this is a hyperbolic paraboloid that opens down along the x-axis and up along the y-axis. First we investigate the intersection of the two surfaces. These values will also affect the direction of the opening, either towards the positive side of the axis or the other way round. Those two guys intersect at z = 1, directly above the circle (in the xy-plane) x² + y² = 1, and at the origin. In Exercises 18–21, find the volume of the given solid. Calculate the volume of the solid bounded by the paraboloid z = 2 − x² − y² and the conic surface z = √(x² + y²). The Definite Integral and its Applications » Part B: Second Fundamental Theorem, Areas, Volumes » Session 59: Volume of a Paraboloid, Revolving About the y-axis. Metzger proposed that a tree bole should be similar to a cubic paraboloid. The volume of the paraboloidal bowl with height h, the semi-axes of the ellipse at the summit being a and b, is V = ½πabh (half of the circumscribed cylinder). Find the volume of the solid enclosed by the paraboloids z = x² + y² and z = 36 − 3x² − 3y². Schonbrich, "Analysis of Hyperbolic Paraboloid Shells", Concrete Thin Shells, ACI Special Publication, SP-28, 1971. Volume of a Paraboloid via Disks (MIT, David Jerison). The use of reinforced concrete in the hyperbolic paraboloid offers the same.

Consider half a parabola on a given interval. This allows us to use a paraboloid frustum where that form appears more appropriate than a cone. (a) Find the volume of the region E that lies between the paraboloid $z = 24 - x^2 - y^2$ and the cone $z = 2\sqrt{x^2 + y^2}$. (b) Find the centroid of $E$ (the center of mass in the case where the density is constant). Volume of a solid under a paraboloid. The student should be very attentive to instruction on learning graphing techniques. A reflecting off-axis paraboloid is frequently used either to collimate the light from a point source or to concentrate in a point the light from a collimated beam. It follows that Rutherford scattering of particles of a particular energy is equivalent to scattering from a particular paraboloid of revolution. The elliptic paraboloids can be defined as the surfaces generated by the translation of a parabola (here with parameter p) along a parabola in the same direction (here with parameter q). It's not too complicated to integrate dual-paraboloid reflections into an engine/framework. In this position, the hyperbolic paraboloid opens downward along the x-axis and upward along the y-axis (that is, the parabola in the plane x = 0 opens upward and the parabola in the plane y = 0 opens downward).

My opinion is that it is well suited to longer log lengths but may overestimate volume in short logs. Modern calculus texts will have extensive material on the quadric surfaces. Using a triple integral, I need to find the volume of the solid region in the first octant enclosed by the circular cylinder r = 2, bounded above by the circular paraboloid z = 13 − r², and bounded below. The hyperbolic paraboloid is also called a saddle due to its shape. Integrate over the solid S in the first octant bounded above by the paraboloid, below by the xy-plane, and on the sides by the given planes. V = ½πb²a. Candela, "General Formulas for Membrane Stresses in Hyperbolic Paraboloid Shells", ACI Journal Proceedings. The volume of the paraboloidic bowl with height h, the radius of the circle at the summit being R, is V = ½πR²h (half of the circumscribed cylinder). Sketch and CLEARLY LABEL the region of integration. A hyperbolic paraboloid (not to be confused with a hyperboloid) is a doubly ruled surface shaped like a saddle.
The paraboloid has equation y = c(x² + z²) (where z is the axis coming out of the page) and is a surface of revolution about the y-axis of the curve y = cx².

Paraboloid pad: Yrjö Kukkapuro's unusual Helsinki home, by Florencia Colombo. In a secluded area of woodland in southern Finland stands a building that defies simple geometrical description — the closest approximation to a definition might be an "asymmetric hyperbolic paraboloid groin vault" — and that challenges any formal description.

x² + y² = 4 = 2², whose area is 4π, so the volume is 8π. Find the volume of the solid enclosed by the paraboloid z = 2 + x² + (y − 2)² and the planes z = 1, x = 1, x = −1, y = 0, and y = 4. The opening can be circular, depending on the values of a, b and c. A paraboloid is elliptic if the cross-sections perpendicular to its axis of symmetry are ellipses. It asks for the volume between the paraboloid z = x² + y² and the sphere x² + y² + z² = 2 — a question I came across in Calc 3. Find the volume of the region bounded above by the paraboloid z = x² + y² and below by the triangle enclosed by the lines y = x, x = 0, and x + y = 2 in the xy-plane.

I set up the integral to be ∬(x² + 3y²) dx dy, with limits (1, ?) and (0, y); what else do I evaluate the outside integral by? As mentioned elsewhere, the volume of the elliptic paraboloid is a bit tricky. This is especially noticeable on simple objects (spheres, cubes, planes, etc.). The main issue is correctly handling the seam where the two maps meet.

Thābit ibn Qurra (ثابت بن قره, Thebit/Thebith/Tebit; 826 or 836 – February 18, 901) was a Sabian mathematician, physician, astronomer, and translator who lived in Baghdad in the second half of the ninth century during the time of the Abbasid Caliphate.

Find the volume of the region bounded by the elliptic paraboloid z = 4 − x² − ¼y² and the plane z = 0. (15 pts) Given: $\int_0^2\!\int_y^{2y} f(x,y)\,dx\,dy$. Solution: Volume of ellipsoid: V = 4/3 × π × a × b × c = 4/3 × π × 21 × 15 × 2 ≈ 2640 cm³. Example 2: The ellipsoid whose radii are given as r₁ = 9 cm, r₂ = 6 cm and r₃ = 3 cm. Calculations at a paraboloid of revolution (an elliptic paraboloid with a circle as top surface). This is defined by a parabolic segment based on a parabola of the form y = sx² in the interval x ∈ [−a, a] that rotates around its height. Calculate the volume enclosed by the paraboloid z = x² + y² and the plane z = 10, using a double integral in the Cartesian coordinate system. Use polar coordinates to find the volume of the given solid.

A satellite dish is shaped like a paraboloid of revolution. The vertex of the paraboloid is at (0, 0, 10). x² + y² = 2. View B of figure 2-41 shows a horizontally truncated, or vertically shortened, paraboloid. "As an origami pattern, it has structural bistability which could be harnessed for metamaterials used in energy trapping." If c is positive then it opens up and if c is negative then it opens down. Enter the shape parameter s (s > 0, normal parabola s = 1) and the maximal input value a (equivalent to the radius). The limits for the first integral dz would be z = 1 and z = 0. Paraboloid: an open surface generated by rotating a parabola (q.v.) about its axis. Otherwise you can apply Guldin's theorem (Pappus's centroid theorem) for the volume of a solid of revolution. Question: Find the volume of the solid enclosed by the paraboloid z = 3 + x² + (y − 2)² and the planes z = 1, x = −2, x = 2, y = 0, and y. Find the volume of the region below the hyperbolic paraboloid and above the region R. There are two kinds of paraboloids: elliptic and hyperbolic. Since z = 2√(x² + y²) and z = 24 − x² − y² (assuming r is nonnegative), setting 2r = 24 − r² and solving gives r = 4, so the intersection of these two surfaces is a circle of radius 4 in the plane z = 8; the cone is the lower bound for z and the paraboloid is the upper bound. It has an elliptical opening.
An elliptic paraboloid is shaped like an oval cup and has a maximum or minimum point when its axis is vertical. I'm studying volume and multiple-integrals theory. Description of the hyperbolic paraboloid with interactive graphics that illustrate cross sections and the effect of changing parameters. This shape has been traditionally recommended for determining the cubic volume of logs. v = [1/(b + 1)] · (π/4) · d₀² · l, where v = volume, d₀ = diameter at base, l = length from base to tip, and b = a constant which varies with shape. Find the volume of the solid that lies under the paraboloid z = x² + y², above the xy-plane, and inside the cylinder x² + y² = 2x; it comes under the chapter on multiple integrals. Hyperbolic paraboloid definition: a saddle-shaped quadric surface whose sections by planes parallel to one coordinate plane are hyperbolas, while those sections by planes parallel to the other two are parabolas, if proper orientation of the coordinate axes is assumed.

According to the given information, it is required to find the volume of the solid bounded by the paraboloid and below the region bounded by two circles. Comparing with the volume of the cylinder, $V_{cylinder} = \pi r^2 h$, the volume of the paraboloid is half the volume of the cylinder. Volume of a paraboloid of revolution. The hyperbolic paraboloid is a ruled surface, which means that you can create it using only straight lines even though it is curved. The plane z = 4 provides a "floor" for the solid. Find the volume of the solid that is the common interior below the sphere x² + y² + z² = 80 and above the paraboloid z = ½(x² + y²). There are more complicated shapes called "paraboloid", but the circular form must be the one meant due to the comparison to the cylinder. Such differences are negligible given the variety of CWM shapes and practical measurement challenges. n = 49, Δx = Δy = 1. The appropriate final rotating surface shape is a paraboloid of radius 7 inches and depth 8 inches. (5 points) Find the volume of the solid that lies under the hyperbolic paraboloid and above the square R = [−1, 1] × [0, 2].

Paraboloid shapes can be further broken down into quadratic and cubic paraboloids. Homework statement: evaluate the volume of the paraboloid z = x² + y² between the planes z = 0 and z = 1. The attempt at a solution: I figured we would need to rearrange so that F(x, y, z) = x² + y² − z, then do a triple integral dx dy dz of the function F. Find the volume of the solid under the paraboloid z = 5x² + 9y² + 6 and above the region in the xy-plane bounded by y = x and x = y² − y. Triple integrals in cylindrical or spherical coordinates. (Paraboloid of revolution) Determine the shape assumed by the surface of a liquid being spun in a circular bowl at constant angular velocity W. In a suitable coordinate system with three axes it can be represented by an equation whose constants dictate the level of curvature in the two coordinate planes. F5 = [−zy, zx, x² + y²]. The connection with area. Show step-by-step solutions for the following questions, otherwise no credit will be awarded. Estimate the volume of the solid that lies above the square and below the elliptic paraboloid.
Verify Stokes' theorem for the case in which S is the portion of the upper sheet of the hyperbolic paraboloid. Fischer, G. it follows that Rutherford scattering of particles of a particular energy is equivalent to scattering from a particular paraboloid of revolution. Volume of a Paraboloid via Disks by MIT / David Jerison does not currently have a detailed description and video lecture title. 00), but this association was not. 25in}{x^2} + {z^2} = 4\]. Such differences are negligible given the variety of CWM shapes and practical measurement challenges. There are more complicated shapes called "paraboloid", but the circular form must be the one meant due to the comparison to the. At we have the base of the paraboloid, which is a circle. A hyperbolic paraboloid (not to be confused with a hyperboloid) is a doubly ruled surface shaped like a saddle. Irregular shape: ball, ellipsoid, cone, paraboloid, hyperboloid The dummy rules! eg1. Ask Question Asked 2 years, 8 months ago. com Knockmeenagh Road, Newlands Cross, Clondalkin, D22 AC98 Tel: +353 1 4593471 Fax: +353 1 4591093 Email: [email protected] As mentioned elsewhere the volume of the elliptic paraboloid is a bit tricky. Find the volume of the solid enclosed by the cylinder x^2+y^2=4, bounded above by the paraboloid z=x^2+y^2, and bounded below by the xy-plane. xdV, where V is bounded by the paraboloid x= 4y 2 + 4z 2 and the plane x= 4. Integration in cylindrical coordinates (r, \theta, z) is a simple extension of polar coordinates from two to three dimensions. Find the volume of the region bounded above by the sphere x^2+y^2+z^2=2a^2w and below by the paraboloid x^2+y^2=az where a is a positive #? Find answers now! No. Internationally published Pinup covergirl. Consider the surface z = x 2 - y 2 a. Volume: 04 Issue: 02 Finite Element Analysis Of Hyperbolic Paraboloid Shell By Using ANSYS 1 S. 
Use a double integral to find the volume of the solid bounded above by the paraboloid z = x² + y², below by the xy-plane, and laterally by the circular cylinder x² + (y − 1)² = 1. In polar coordinates the cylinder becomes r = 2 sin θ, so r runs from 0 to 2 sin θ and θ from 0 to π. A related region is the one inside both the cylinder and the paraboloid and above the plane. In another attempt, I set up the integral of (x² + 3y²) dx dy but am unsure how to evaluate the outer integral.

A paraboloid (pă-rab-ŏ-loid) is a curved surface formed by the rotation of a parabola about its axis. For a paraboloid z = c(x² + y²): if c is positive it opens up, and if c is negative it opens down. In general it has an elliptical opening, and the variable that is not squared determines the axis along which the paraboloid opens.

• Problem 3: Let S be the boundary of the solid bounded by the paraboloid z = x² + y² and the plane z = 4, with outward orientation.
• Find the volume of the solid that lies under the paraboloid z = x² + y², above the xy-plane, and inside the cylinder x² + y² = 2x (a multiple-integrals exercise).

It also includes the Schwarzschild approximations, which can be used for a rigorous calculation of the propagation of light waves. Hyperbolic paraboloid shell roofs are built in several forms: gabled, hipped, and others. The one doubly curved shell that cuts costs through easier forming is the hyperbolic paraboloid, and the use of reinforced concrete offers the same advantage. The octants of 3-space are labeled I through VIII.
(10 pts) Set up but DO NOT EVALUATE a multiple integral to find the volume of the solid that lies under the given paraboloid and above the rectangle R. Here is the equation of a hyperbolic paraboloid: z = x²/a² − y²/b². A reflecting off-axis paraboloid is frequently used either to collimate the light from a point source or to concentrate in a point the light from a collimated beam.

• Volume of a paraboloid (Archimedes): the region bounded by the parabola y = ax² and the horizontal line y = h is revolved about the y-axis to generate a solid. Find its volume.
• (Paraboloid of revolution) Determine the shape assumed by the surface of a liquid being spun in a circular bowl at constant angular velocity W.
• Multivariable calculus: using a triple integral, find the volume of the region in three-space bounded by the plane z = 4 and the paraboloid z = x² + y². Sketch the region.
• Example 2: Set up a triple integral for the volume of the solid.

Proof, as requested by an earlier reader: the volume of a cone is 1/3 the volume of the corresponding cylinder, i.e. (1/3)πr²h.

Reference: IEEE Transactions on Neural Networks and Learning Systems, Volume 30, Issue 1.
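Several of the estimation problems above ask for the volume under a paraboloid over a rectangle via Riemann sums. A small midpoint-sum sketch (the helper name, test surface, and grid size are my own assumptions, not from the source):

```python
def midpoint_double_sum(f, x0, x1, y0, y1, n):
    """Midpoint Riemann-sum estimate of the volume under z = f(x, y)
    over the rectangle [x0, x1] x [y0, y1], using an n-by-n grid."""
    dx, dy = (x1 - x0) / n, (y1 - y0) / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += f(x0 + (i + 0.5) * dx, y0 + (j + 0.5) * dy) * dx * dy
    return total

# Volume under the paraboloid z = x^2 + y^2 over the square [0, 2] x [0, 2];
# the exact value of the double integral is 32/3.
print(midpoint_double_sum(lambda x, y: x * x + y * y, 0, 2, 0, 2, 200))
```

With a 200-by-200 grid the estimate agrees with the exact 32/3 to about four decimal places; coarser grids (the n = 25 case mentioned below) give rougher estimates.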
Volume of a Solid by Plane Slicing, practice exercises:

• Suppose the density of the object is given by f(x, y, z) = 8 + x + y; compute its mass as a triple integral.
• Find the volume of the region bounded by the elliptic paraboloid z = 4 − x² − (1/4)y² and the plane z = 0.
• Find the volume of the solid enclosed by the paraboloids z = x² + y² and z = 36 − 3x² − 3y².
• The ice-cream problem (answer: 32π/3). One region is outside the cylinder, inside the paraboloid, and above the plane; the other is inside both the cylinder and the paraboloid and above the plane.
• Example 2: Set up a triple integral for the volume of the solid.

A paraboloid is an open surface generated by rotating a parabola about its axis. The Volume of Paraboloid calculator computes the volume of revolution of a parabola around an axis, given a length (a) and a width (b). The volume of a solid over a region D is V = ∬_D z dA, where z is the given function. How to integrate in cylindrical coordinates is covered below. The differential cross section for scattering by a perfectly elastic, impenetrable paraboloid of revolution is obtained.

References: "Similarity solution for oblique water entry of an expanding paraboloid," Journal of Fluid Mechanics, Volume 745; MIT OpenCourseWare, "Volume in cylindrical coordinates."

Like you said, I believe GTA 4 used dual-paraboloid (DP) reflections.
"The hyperbolic paraboloid is a striking pattern that has been used in architectural designs the world over," said Glaucio Paulino, a professor in the Georgia Tech School of Civil and Environmental Engineering. "As an origami pattern, it has structural bistability which could be harnessed for metamaterials used in energy trapping or other applications."

Measurement note: the volume of the paraboloidic bowl with height h, the radius of the circle at the summit being R, is half that of the circumscribed cylinder: V = (1/2)πR²h. The angular dependence is identical to that for Rutherford scattering. The following sections give brief descriptions of each spreadsheet.

Sample problems:

• Solve using double integration in polar coordinates.
• Let S1 be the part of the paraboloid z = x² + y² that lies below the plane z = 4.
• Use a double integral to find the volume of the solid bounded by a paraboloid and a cylinder.
• (a) Find the volume of the region E that lies between the paraboloid z = 24 − x² − y² and the cone z = 2√(x² + y²).
• Find the volume of the region bounded above by the paraboloid z = 4x² + 3y² and below by the square R.
• Consider the horizontal square cross-section of a cube through its center.
• Find the volume of the solid obtained by rotating the region bounded by given curves.

View E of figure 2-41 illustrates this antenna.
A hyperbolic paraboloid is a saddle-shaped quadric surface whose sections by planes parallel to one coordinate plane are hyperbolas, while sections by planes parallel to the other two coordinate planes are parabolas, assuming proper orientation of the coordinate axes. At either extreme position the edges form four of the edges of a regular tetrahedron. Differently shaped reflectors produce differently shaped beams.

• Calculate the volume enclosed by the paraboloid z = x² + y² and the plane z = 10, using a double integral in Cartesian coordinates.
• Find the volume of the solid bounded above by a given surface. Sketch the region.

Paraboloid volume: V = (1/2)πr²h, where r is the radius and h the vertical height. Differences are at most 3.9% of large-end diameter, while differences in inverse taper are at most 3.7% of total length.
Setting the cone and the paraboloid equal (and assuming r is nonnegative) and solving shows that the intersection of the two surfaces is a circle of fixed radius in a horizontal plane: the cone is the lower bound for z and the paraboloid is the upper bound.

Sample problems:

• Estimate the volume of the solid that lies above the square and below the elliptic paraboloid, using n = 25 and Δx = Δy = 1.
• Find the volume of the region bounded by the paraboloid z = x² + y² + 4 and the planes x = 0, y = 0, z = 0, x + y = 1.

A paraboloid is a surface that can be put into a position such that its sections parallel to at least one coordinate plane are parabolas. It is also described as a quadratic surface given by the equation x² + 2rz = 0. The hyperbolic paraboloid is a ruled surface, which means that you can create it using only straight lines even though the surface is curved.
The simplest elliptic paraboloid has the equation z = x² + y². In a three-dimensional coordinate system with origin at the vertex of the paraboloid, its equation has the form z = x²/a² + y²/b²; in the particular case a = b, the elliptic paraboloid is called a circular paraboloid, or paraboloid of rotation. An elliptic paraboloid is a type of quadric surface.

Sample problems:

• Find the area of the surface S that is part of the paraboloid z = x² + y² and is cut off by the plane z = 4. Use cylindrical coordinates.
• Paraboloid volume problem: the region in Quadrant I under the graph is rotated about the y-axis to form a solid paraboloid.
• (b) Find the centroid of $E$ (the center of mass in the case where the density is constant).
• Using this relationship and the given formula for the volume of the paraboloid, calculate the volumes of the solids and compare.
• Area and perimeter of a parabolic section.
• Graphing the region on the xy-plane and converting the equation of the surface to polar coordinates, the volume of the solid is given by a double integral.

The elliptic paraboloid interpolant requires 6 points, so at least 6 centroids are needed. Creating the depth/shadow maps is exactly the same as when we created the reflection maps, with one exception.

Reference: Schonbrich, "Analysis of Hyperbolic Paraboloid Shells," Concrete Thin Shells, ACI Special Publication SP-28, 1971.
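The surface-area problem (the part of z = x² + y² cut off by z = 4) has the closed form S = (π/6)((1 + 4h)^{3/2} − 1). A numeric sketch of the polar-coordinate integral in Python (the function name and step count are my own assumptions):

```python
import math

def paraboloid_surface_area(h, n=50000):
    """Area of the part of z = x^2 + y^2 below the plane z = h:
    S = integral_0^{2*pi} integral_0^{sqrt(h)} r*sqrt(1 + 4*r**2) dr dtheta,
    approximated with the midpoint rule in r."""
    rmax = math.sqrt(h)
    dr = rmax / n
    inner = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        inner += r * math.sqrt(1 + 4 * r * r) * dr
    return 2 * math.pi * inner

# Cut off by z = 4; closed form is (pi/6) * ((1 + 4*h)**1.5 - 1).
print(paraboloid_surface_area(4.0), math.pi / 6 * (17 ** 1.5 - 1))
```

Both numbers come out near 36.18, confirming the closed form for h = 4.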
For what values of the parameters r and h is the volume of the cup maximized? One can envision r and h as the coordinates of a point on a circle of radius 4, so r and h must be related by r² = 16 − h².

Sample problems:

• Find the volume of the solid bounded by the paraboloid z = 4x² + 4y² and the plane z = 36.
• Evaluate ∫₀¹ ∫ₓ¹ cos(y²) dy dx by reversing the order of integration.
• Enclosed by the paraboloid z = 3x² + 2y² and the planes x = 0, …
• Find the volume of the region below the hyperbolic paraboloid and above the region R.
• Use the surface-of-revolution technique for the paraboloid; we can take any parabola symmetric about the x-axis or the y-axis.

This shape has been traditionally recommended for determining the cubic volume of logs; the exact conic-paraboloid is closely approximated by Fermat's paraboloid with exponent 7/5. Form-factor exponents: 0 for a cylinder, 2/3 for a third-degree paraboloid, 1 for a second-degree paraboloid, 2 for a conoid, 3 for a neiloid. For a cylinder of diameter D and height h, the volume is V = π(D²/4)h.

If the axis of the surface is the z-axis and the vertex is at the origin, the intersections of the surface with planes parallel to the xz- and yz-planes are parabolas (see Figure, top). In this video we find the volume of a paraboloid, the one drawn on the board, using what we know about Riemann sums and integrals.
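The cup problem reduces to maximizing r²h subject to r² = 16 − h². Since cylinder (πr²h), cone (πr²h/3) and paraboloid cup (πr²h/2) volumes are all proportional to r²h, the optimal h is the same in each case. A brute-force sketch under that assumption (names are my own):

```python
import math

def objective(h):
    """r^2 * h with the constraint r^2 = 16 - h^2 substituted in."""
    return (16 - h * h) * h

# Brute-force grid search over h in [0, 4]; calculus gives 16 - 3h^2 = 0,
# i.e. h = 4/sqrt(3), for any volume proportional to r^2 * h.
best_h = max((i * 4 / 100000 for i in range(100001)), key=objective)
print(best_h, 4 / math.sqrt(3))
```

The grid search lands within one step of the calculus answer h = 4/√3 ≈ 2.309, with r² = 16 − h² = 32/3.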
Solution: the volume of an ellipsoid is V = (4/3)πabc; with a = 21 cm, b = 15 cm and c = 2 cm, V = (4/3)π(21)(15)(2) ≈ 2640 cm³. Example 2: an ellipsoid whose radii are r1 = 9 cm, r2 = 6 cm and r3 = 3 cm.

Bibliographic data: Journal of Elliptic and Parabolic Equations, 1 volume per year, 2 issues per volume.

Since we're seldom interested in a paraboloid that includes the entire trunk, we need a formula for the frustum of a paraboloid. The paraboloid is a tapered shape that bows outward, increasing the volume of the shape (see Figure 4). The elliptic paraboloid interpolant requires 6 points, so at least 6 centroids are needed.

• Find the volume of a solid under a paraboloid.
• Hint (for the spinning-liquid problem): consider a particle of liquid located at (x, y) on the surface of the liquid.
• Step 2: first find the volume V1 between the paraboloid and circle 1, then the volume V2 between the paraboloid and circle 2; the required volume is V = V2 − V1.
• Solve using double integration in polar coordinates, or using triple integration.

In cylindrical coordinates, the volume of a solid is defined by the formula \[V = \iiint\limits_U \rho \, d\rho \, d\varphi \, dz.\]
Calculate the volume of the solid bounded by the paraboloid z = 2 − x² − y² and the conic surface z = √(x² + y²); slicing in the z-direction is one approach. A paraboloid of revolution can be formed by rotating a parabola around its axis of symmetry. A paraboloid is elliptic if the sections perpendicular to its axis of symmetry are ellipses.

While doing some math, I got stuck on the shadow of the intersection between a plane and (I guess) an elliptic paraboloid: A is the region described by { z ≥ 5x² + 2y² − 4xy, z ≤ x + 2y + 1 }.

A couple of ways to parameterize a hyperbolic paraboloid and write an equation: z = x² − y² (2000, volume 158, number 13, pages 200-201). An elliptic paraboloid is shaped like an oval cup and has a maximum or minimum point when its axis is vertical.

Reference: Candela, "General Formulas for Membrane Stresses in Hyperbolic Paraboloid Shells," ACI Journal, Proceedings, pp. 353-371, 1960.
Let U be the solid enclosed by the paraboloids z = x² + y² and z = 8 − (x² + y²), and write ∭_U xyz dV as an iterated integral in cylindrical coordinates.

In Exercises (18)-(21), find the volume of the given solid:

• (18) The tetrahedron enclosed by the coordinate planes and the plane 2x + …
• Find the volume of the region bounded above by the paraboloid z = x² + y² and below by the triangle enclosed by the lines y = x, x = 0, and x + y = 2 in the xy-plane. (Give the volume as a simplified fraction.)
• Find the volume of the solid bounded above by the plane, below by the xy-plane, and on the sides by the given surfaces.

The hyperbolic paraboloid is one of the doubly ruled surfaces; the others are the hyperboloid of one sheet and the flat plane. Alternatively, you can apply Guldin's theorem (Pappus-Guldinus) for the volume of a solid of rotation.
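The solid enclosed by z = x² + y² and z = 8 − (x² + y²) is a good check case: the surfaces meet where r² = 8 − r², i.e. at r = 2, and the polar integral evaluates to 16π. A numeric sketch (function name and discretization are my own assumptions):

```python
import math

def volume_between_paraboloids(n=100000):
    """Volume enclosed by z = x^2 + y^2 (below) and z = 8 - (x^2 + y^2)
    (above); they meet at r = 2, and in polar coordinates
    V = integral_0^{2*pi} integral_0^2 (8 - 2*r**2) * r dr dtheta."""
    dr = 2 / n
    inner = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        inner += (8 - 2 * r * r) * r * dr
    return 2 * math.pi * inner

print(volume_between_paraboloids(), 16 * math.pi)
```

Both printed values are close to 50.265, i.e. 16π.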
This work includes all parametric formulas to describe paraboloid-aspheric or aspheric-paraboloid lenses for any finite conjugate planes.

The area of an ellipse is πab, and the volume of a cone is (1/3) × base area × height, so the volume of an elliptic cone is (1/3)πabh. The opening of an elliptic paraboloid can be circular, depending on the values of a, b and c. There are two kinds of paraboloids: elliptic and hyperbolic. The general equation of the parabola makes y proportional to x²; although the drawings show the paraboloid inverted, this does not affect the results. The plane z = 4 provides a "floor" for the solid.

• Find the volume of the solid bounded above by the paraboloid z = 9 − x² − y² and below by a semicircular region bounded by the y-axis and a circular arc.
Here we use the disk method to find the volume of a paraboloid as a solid of revolution. The paraboloid has equation y = c(x² + z²) (where z is the axis coming out of the page); it is the surface of revolution about the y-axis of the curve y = cx². The shadow R of the solid D is then a circular disc, best described in polar coordinates. The formula for the volume of a frustum of a paraboloid is V = (πh/2)(r1² + r2²), where h is the height of the frustum, r1 is the radius of its base, and r2 is the radius of its top.

Such paraboloid neural networks are proven to have superior recognition accuracy in a number of applications. Example 1: an ellipsoid whose axes are a = 21 cm, b = 15 cm and c = 2 cm. According to the given information, it is required to find the volume of the solid bounded by the paraboloid and below the region bounded by two circles. Calculations at a paraboloid of revolution (an elliptic paraboloid with a circle as its top surface). Solve on a digital computer, plot the streamlines, and show the volume graphically.

Reference: Tupe, Department of Civil Engineering, Deogiri Institute of Engineering and Management Studies.
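The frustum formula V = (πh/2)(r1² + r2²) is exact for a quadratic paraboloid, because its cross-sectional area πr² grows linearly with height. A sketch in Python (function name is my own), including a check that frustum volumes add up correctly:

```python
import math

def paraboloid_frustum_volume(h, r1, r2):
    """V = (pi*h/2) * (r1**2 + r2**2): exact for a quadratic paraboloid,
    whose cross-sectional area pi*r**2 grows linearly with height."""
    return math.pi * h / 2 * (r1 ** 2 + r2 ** 2)

# With r2 = 0 the frustum is a full paraboloid cap, recovering the
# half-cylinder rule V = (1/2)*pi*r1**2*h.
print(paraboloid_frustum_volume(3.0, 2.0, 0.0), math.pi * 4 * 3 / 2)
```

For the paraboloid r² = z cut at z = 4, stacking the frustum from z = 0 to 1 (radii 0 and 1) on the frustum from z = 1 to 4 (radii 1 and 2) reproduces the full cap volume 8π.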
We are to find the volume of a solid generated by revolving the region bounded by the parabola $$y^{2}=2px$$ $$(p\gt 0)$$ and the line $$x=c$$ $$(c\gt 0)$$ about the $$x$$-axis. The volume of a paraboloid can be compared with the volume of an equivalent cylinder.

• Cylinder and paraboloids: find the volume of the region bounded below by the paraboloid z = x² + y², laterally by the cylinder x² + y² = 1, and above by a second paraboloid. The projection of the region onto the xy-plane is a circle centered at the origin; we can try doing it by slicing in the z-direction.

Consequently, a continuous heating during the melting-zone displacement was obtained, which is stopped once the welding sequence is completed and the flow time function defined.

Last time I introduced using dual-paraboloid environment mapping for reflections; the main issue is correctly handling the seam where the two maps meet. The shape parameter has no unit; radius a and height h have the same unit (e.g. meters). In a suitable coordinate system, a hyperbolic paraboloid can be represented by the equation z = c(y² − x²); for c > 0, this is a hyperbolic paraboloid that opens down along the x-axis and up along the y-axis.
Sample problems:

• Find the volume of the solid that lies under the hyperbolic paraboloid and above the square R: −2 ≤ x ≤ 2, −2 ≤ y ≤ 2. (12 pts)
• Find the volume of the solid S bounded by the elliptic paraboloid x² + 2y² + z = 16, the planes x = 2 and y = 2, and the three coordinate planes.
• Note that the surface S consists of a portion of the paraboloid z = x² + y² and a portion of the plane z = 4.

Is this question asking for the volume inside the paraboloid or for the volume outside of it? Another posted answer gave the volume inside the paraboloid, but that doesn't seem right to me.

In cylindrical coordinates, dV = r dz dr dθ, the volume of an infinitesimal sector between z and z + dz, r and r + dr, and θ and θ + dθ. This coordinate system works best when integrating cylinders and other rotationally symmetric solids.

It's not too complicated to integrate dual-paraboloid reflections into an engine/framework. A general quadric interpolant can be written h = a + bx + cy + dx² + exy + fy². The use of reinforced concrete in the hyperbolic paraboloid offers the same advantage. The plane z = 4 provides a "floor" for the solid.
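The elliptic-paraboloid problem (x² + 2y² + z = 16 over [0, 2] × [0, 2]) evaluates exactly to 48. A midpoint-rule sketch of the double integral (function name and grid size are my own assumptions):

```python
def volume_under_elliptic_paraboloid(n=400):
    """Midpoint-rule estimate of V = integral_0^2 integral_0^2
    (16 - x**2 - 2*y**2) dx dy, i.e. the solid bounded by the elliptic
    paraboloid x^2 + 2y^2 + z = 16, the planes x = 2 and y = 2, and the
    three coordinate planes."""
    d = 2 / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * d, (j + 0.5) * d
            total += (16 - x * x - 2 * y * y) * d * d
    return total

print(volume_under_elliptic_paraboloid())   # exact value is 48
```
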
Related titles: "Rate sensitivity of compressive strength of columnar-grained ice"; "Behavior of microconcrete hyperbolic-paraboloid shells."

Thābit ibn Qurra (ثابت بن قره, Thebit/Thebith/Tebit; 826 or 836 to February 18, 901) was a Sabian mathematician, physician, astronomer, and translator who lived in Baghdad in the second half of the ninth century, during the time of the Abbasid Caliphate.

I'm studying volume and multiple-integrals theory, and I don't know what the other limits would be (y1, y2 and x1, x2?). The general equation for this type of paraboloid is x²/a² + y²/b² = z. A paraboloid is a solid of revolution generated by rotating the area under a parabola about its axis. Metzger proposed that a tree bole should be similar to a cubic paraboloid: a quadratic paraboloid (b = 1) would generate a straight line if height were plotted against radius squared, while a cubic paraboloid would not. Each of the intermediate figures is a hyperbolic paraboloid.

Video: "Volume of a Paraboloid via Disks," MIT 18.01SC Single Variable Calculus, Fall 2010.
The hyperbolic paraboloid is also called a saddle because of its shape.
# Homework Help: Calculus 3 Change of Variables: Jacobians

1. Apr 30, 2009 ### Wargy

The problem statement, all variables and given/known data: Evaluate ∬_R e^{xy} dA, where R is the region enclosed by the curves y/x = 1/2, y/x = 2, xy = 1, and xy = 2.

Relevant equations: none.

The attempt at a solution: I have the region graphed and I'm currently working on acquiring the change-of-variables functions in x and y. I have attempted to solve the system of equations with u = y/x and v = xy to obtain these, but I'm having some trouble. If I could be pointed in the right direction I would be greatly appreciative! Thanks.

2. May 1, 2009 ### HallsofIvy

You have u = y/x and v = xy, so y = xu. Substitute that into v = xy: v = x²u. Now solve for x.
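Carrying the hint through (a sketch, not part of the original thread): v = x²u gives x = √(v/u) and y = √(uv), with Jacobian |∂(x,y)/∂(u,v)| = 1/(2u). The integral then becomes ∫₁² ∫_{1/2}^{2} e^v / (2u) du dv = ln 2 · (e² − e). A numeric check in Python (function name and grid size are my own):

```python
import math

def transformed_integral(n=500):
    """Midpoint approximation of integral_1^2 integral_{1/2}^2
    exp(v) / (2*u) du dv, i.e. the original integral of e^{xy} over R
    after the substitution u = y/x, v = x*y with Jacobian 1/(2*u)."""
    du = 1.5 / n
    dv = 1.0 / n
    total = 0.0
    for i in range(n):
        u = 0.5 + (i + 0.5) * du
        for j in range(n):
            v = 1.0 + (j + 0.5) * dv
            total += math.exp(v) / (2 * u) * du * dv
    return total

print(transformed_integral(), math.log(2) * (math.e ** 2 - math.e))
```

Both values come out near 3.24, matching the closed form (ln 2)(e² − e).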
# Structure Vector in class

## Recommended Posts

Hello, I am trying to make a class work in C++. This is what I have written so far:

#include <iostream>
#include <string>
#include <vector>
#include <fstream>

using namespace std;

struct userstats {
    string ID;
    int secLev; // 0: normal user (Customer, can only interact through messages)
                // 1: moderator (can delete level-0 users and see the list of users)
                // 2: admin (same as moderator, plus the right to delete moderators as well; defines root password) - INERASABLE
    string name;
    string password;
};

class userbase {
private:
    string password;
public:
    userbase();
    static vector<userstats> users;
    void setPas();
    // void resetdb(string uP);
};

userbase::userbase() {
    password = "default";
}

void userbase::setPas() {
    cout << "Enter current admin/root password: ";
    string pass;
    getline(pass);
    if (pass == userbase::password) {
        password = pass;
        userbase::users[1].password = pass;
    } else
        cout << "Password incorrect.";
}

int main() {
    userbase current;
    ifstream inFile;
    inFile.open("userlist.txt");
    if (!inFile.good()) {
        current::users.push_back();
        ofstream outFile("userlist.txt");
        outFile << current::users(1).ID;
    }
    return 0;
}

This gives me a couple of errors when I try to build it:

||=== brugersystem v2, Debug ===|
Line 36 |error: no matching function for call to 'getline(std::string&)'|
Line 59 |error: 'current' is not a class or namespace|
Line 61 |error: 'current' is not a class or namespace|
||=== Build finished: 3 errors, 0 warnings ===|

I am not sure about any of them. The line 59 and 61 errors appear when I want to create a database with a "default" admin member; on line 36 I try to receive line input from a user. This whole program is meant to be a sort of vague user system. It will check whether the .txt file containing the information exists.
If not, it should create the database with a default admin account.

Regards, Boooke

##### Share on other sites

||=== brugersystem v2, Debug ===|
Line 36 |error: no matching function for call to 'getline(std::string&)'|

Well, first off you probably want: "getline(cin, pass);"

Line 59 |error: 'current' is not a class or namespace|
Line 61 |error: 'current' is not a class or namespace|

Secondly, you're trying to access an object with the wrong operator; try ".", as in "current.users.push_back();" (without the quotes, obviously). Keep in mind you aren't actually pushing anything back either, so that won't do much. Oh, and you should probably use an enum for secLev. Bleh, okay, there's actually a lot of things wrong with this whole thing.

EDIT: I "partially" rewrote it for you. I honestly couldn't figure out what you were trying to do on half of it, and I couldn't figure out what setPas was supposed to be doing, so I just removed it.

#include <iostream>
#include <string>
#include <vector>
#include <fstream>

using namespace std;

enum SecurityLevel { NORMAL, MODERATOR, ADMIN };

struct User {
    string ID;
    string name;
    string password;
    SecurityLevel secLevel;
};

class Database {
public:
    Database();
    Database(string rootPass);
    vector<User> userTable;
private:
    string m_rootPass;
};

Database::Database() {
    m_rootPass = "admin";
}

Database::Database(string rootPass) {
    m_rootPass = rootPass;
}

int main() {
    Database database;
    ifstream file("userlist.txt");
    string line;
    if (file.is_open()) {
        while (file.good()) {
            getline(file, line);
            // Do some tokenizing and assignment here
        }
    }
    file.close();
    return 0;
}

##### Share on other sites

Hey, thank you, and sorry for the late answer. Yes, I know, a mess. I did use enums before, but I thought I'd optimize everything when it was all clear and functional.
I was wondering whether it is allowed and somewhat accepted to run checks inside the class's constructor, even before creating a class object, like this:

Userbase::Userbase() {
    ifstream inFile("userlist.txt");
    if (inFile.good()) {
        // Read password from first user (admin) in .txt
    } else
        rootPass = "Admin1";
    inFile.close();
}

Also, I seem not to be able to use push_back() with this:

int main() {
    Userbase userbase;
    ifstream inFile;
    inFile.open("userlist.txt");
    if (inFile.good()) {
        // ADD DATE OF MODIFICATION
        cout << "USERLIST FOUND, READING USERS.\n";
        inFile.close();
    } else {
        cout << "NO USERS FOUND, CREATING NEW LIST.\n";
        inFile.close();
        userbase.users.push_back(1, "admin", "Admin1", admin);
        ofstream outFile("userlist.txt");
        time_t current = time(0);
        outFile << "created: " << current << "\n";
        outFile << "mod_date: " << current << "\n\n";
        outFile.close();
    }
    return 0;
}

It gives me an error at line 78 (which is the push_back() call):

error: no matching function for call to 'std::vector<userstats, std::allocator<userstats> >::push_back(int, const char [6], const char [7], securityLevel)'|

The vector's element structure looks like this:

struct userstats {
    string ID;
    string name;
    string password;
    securityLevel secLev; // enum securityLevel {user, moderator, admin};
};

By the way, all the functions I have written in the class (setPas(), resetdb()) are ideas that I have written down to remember and will fully implement once I have sorted this vector thing out.

##### Share on other sites

ID is a string, not an int.

##### Share on other sites

... Whoops! Thanks for the heads-up. But the problem seems to persist, with the same error except that the "int" parameter in the message changes to "const char[3]".

##### Share on other sites

push_back() wants a copy of the object to be put into the container, not the components that make up the object. Give your struct a constructor and pass the relevant information to the constructor in the push_back() call.
##### Share on other sites

Ah, I see. So every time I want to create a new element in the vector I will have to copy a whole structure object into the vector? I wrote something like this:

...
else {
    cout << "NO USERS FOUND, CREATING NEW LIST.\n";
    inFile.close();
    userstats buffer = {"d1", "admin", "Admin1", admin};
    userbase.users.push_back(buffer);
    ofstream outFile("userlist.txt");
    time_t current = time(0);
    outFile << "created: " << current << "\n";
    outFile << "mod_date: " << current << "\n\n";
    outFile.close();
}
...

Do you know the reason for this? A security measure for the correct data to be parsed, maybe? And thank you both for the solutions.

Another note: I do not recommend putting using namespace std; at the top like that. You are putting all that junk into the global namespace, which isn't good practice. I'd recommend putting it inside functions instead, or at least only doing stuff like this:

using std::cout;
using std::cin;
using std::string;
## Precision and recall clarification

1

I'm reading the book Fundamentals of Machine Learning for Predictive Data Analytics by Kelleher, et al. I've come across something that I think is an error, but I want to check to be sure. When explaining precision and recall the authors write:

Email classification is a good application scenario in which the different information provided by precision and recall is useful. The precision value tells us how likely it is that a genuine ham email could be marked as spam and, presumably, deleted: 25% (1 − precision). Recall, on the other hand, tells us how likely it is that a spam email will be missed by the system and end up in our inbox: 33.333% (1 − recall).

Precision is defined as: $$\frac{TP}{TP + FP}$$. Thus: $$1 - \text{precision} = 1 - \frac{TP}{TP+FP} = \frac{FP}{TP + FP} = P(\textrm{prediction incorrect}\mid\textrm{prediction positive})$$ So this should give us the probability that an email marked as ham (positive prediction) is actually spam. So should precision and recall in the quote above be switched?

0

It's very likely that the authors assume that the spam class is positive, whereas you intuitively associated the ham class with positive. Both options make sense in my opinion:

• the former interpretation is based on the idea that the goal of the task is to detect the spam emails, seen as the class of interest.
• the latter interpretation considers that the ham emails are the "good ones", the ones that we want, hence the "positive" class.

There's no error when one reads the paragraph with the authors' interpretation in mind. This confusion illustrates why one should always clearly define which class is positive in a binary classification problem :)

It's a good observation that ham is not necessarily positive. But in this case: P(predict spam|actual ham) = ${FS \over TH+FS}$.
1 - precision with spam=positive gives $FS \over TS + FS$ – snowape – 2020-06-16T08:30:47.853 From the sentence "how likely it is that a genuine ham email could be marked as spam" your interpretation makes sense, but from the context I think the authors meant it the other way around: the first conditional should be understood as P(actual ham|predict spam)=FS/(TS+FS), which is the same as 1 - precision. – Erwan – 2020-06-16T14:15:20.203 I can't quite wrap my head around interpreting the sentence that way, but maybe that's what the authors meant. Switching precision and recall seems like a simpler way. I'll mark this as solved anyway. – snowape – 2020-06-17T10:16:09.340
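To make the two readings concrete, here is a small sketch with hypothetical confusion-matrix counts (chosen here for illustration, not taken from the book) that reproduce the quoted figures of 25% and 33.333%, with spam as the positive class:

```python
# Hypothetical counts, spam = positive class.
TP = 6   # spam correctly marked spam
FP = 2   # ham wrongly marked spam
FN = 3   # spam that slipped through to the inbox

precision = TP / (TP + FP)
recall = TP / (TP + FN)

print(1 - precision)  # 0.25   -> P(actual ham | predicted spam)
print(1 - recall)     # ~0.333 -> P(predicted ham | actual spam)
```

Read with this convention, 1 − precision and 1 − recall match the book's 25% and 33.333% exactly, which supports the answer's interpretation.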
Kattis Repeated Substrings

String analysis often arises in applications from biology and chemistry, such as the study of DNA and protein molecules. One interesting problem is to find how many substrings are repeated (at least twice) in a long string. In this problem, you will write a program to find the total number of repeated substrings in a string of at most $100\,000$ alphabetic characters. Any unique substring that occurs more than once is counted. As an example, if the string is “aabaab”, there are 5 repeated substrings: “a”, “aa”, “aab”, “ab”, “b”. If the string is “aaaaa”, the repeated substrings are “a”, “aa”, “aaa”, “aaaa”. Note that repeated occurrences of a substring may overlap (e.g. “aaaa” in the second case).

Input

The input consists of at most 10 cases. The first line contains a positive integer, specifying the number of cases to follow. Each of the following lines contains a nonempty string of up to $100\,000$ alphabetic characters.

Output

For each line of input, output one line containing the number of unique substrings that are repeated. You may assume that the correct answer fits in a signed 32-bit integer.

Sample Input 1
3
aabaab
aaaaa
AaAaA

Sample Output 1
5
4
5
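A brute-force reference solution is easy to state, though far too slow for the $100\,000$-character limit, where a suffix array or suffix automaton would be needed; a sketch:

```python
from collections import Counter

def repeated_substrings(s: str) -> int:
    """Count the distinct substrings of s that occur at least twice
    (occurrences may overlap). Enumerates all O(n^2) substrings, so it
    is only suitable as a reference for small inputs."""
    counts = Counter(s[i:j]
                     for i in range(len(s))
                     for j in range(i + 1, len(s) + 1))
    return sum(1 for c in counts.values() if c >= 2)
```

Against the sample data this gives 5, 4, and 5 for "aabaab", "aaaaa", and "AaAaA" respectively.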
### Crypto transaction puzzle on the testnet

address 2MuUKuRSr5sbj9HA9dDo5RS4QVMDrcnyu1o

Here are the steps to get the private keys for the address (secp256k1; address type P2SH). Could someone explain some steps from here? The questions appear inline below, but I will list them at the beginning too:

1. How to find the redeemScript from the transaction?
2. How to calculate the sighash (the same for both signatures)?
3. **How to calculate the cube roots of 1 mod p?** (the three X coordinates share a property with the cube roots of 1 mod p)
4. **How to calculate the cube roots of 1 mod n?** (when this is true for some three points on secp256k1)

We want to grab the funds from 2MuUKuRSr5sbj9HA9dDo5RS4QVMDrcnyu1o
www.blockchain.com/btc-testnet/address/2MuUKuRSr5sbj9HA9dDo5RS4QVMDrcnyu1o

p2sh scriptpubkey : OP_HASH160 0x14 0x186A98FF714EF8DDE99847F6769C3913E770E172 OP_EQUAL

From transaction 4c004c3f06f5b76ae3f325cfb26ff305146bda0a3f9e5662462653b41324ac4a we can tell:
www.blockchain.com/btc-testnet/tx/4c004c3f06f5b76ae3f325cfb26ff305146bda0a3f9e5662462653b41324ac4a

redeemScript below (how to find the redeemScript?):

Code: 5221023F3C3501D05E6151F5B483C3962251EA2113D8F5B76F58C44A4252B4580ED57421033F3C3501D05E6151F5B483C3962251EA2113D8F5B76F58C44A4252B4580ED57452AE

asm:

Code: 2 0x21 0x023F3C3501D05E6151F5B483C3962251EA2113D8F5B76F58C44A4252B4580ED574 0x21 0x033F3C3501D05E6151F5B483C3962251EA2113D8F5B76F58C44A4252B4580ED574 2 OP_CHECKMULTISIG

1. this is a 2-of-2 multisig of two public keys {P1,P2}
2. we can see from the parity byte that P2 = -P1; from this we know..
3.
we must find two private keys {d1,d2}, where d1 = -d2 coordinates for P1 : x1 = 3F3C3501D05E6151F5B483C3962251EA2113D8F5B76F58C44A4252B4580ED574 y1 = CE66AAA31BA3C747A93609B53924D8FFF549315EF352894D491DB9355FDF1528 coordinates for P2 : x2 = 3F3C3501D05E6151F5B483C3962251EA2113D8F5B76F58C44A4252B4580ED574 y2 = 3199555CE45C38B856C9F64AC6DB27000AB6CEA10CAD76B2B6E246C9A020E707 let's take a look at the signatures signature for P1 : Code: 3045022100B68E234D58FEAFC61E733CC95C16E1E042D6D5AAD849A0763704D63C4E49799702200E503CE27C5D94A3D9A164037B51FD13A67EB392FCFB4073A7EB63AE6272532801 signature for P2 : Code: 304402200A35A7B0D6A2EEE7EBD83F730DC6CC359C15515F704706C57EB8D70E59A7AD2402202A58D3F55356A656F2A1E65A66083B680AEC6C704093CB3A3BCD566FA7120C8A01 r1 = B68E234D58FEAFC61E733CC95C16E1E042D6D5AAD849A0763704D63C4E497997 s1 = 0E503CE27C5D94A3D9A164037B51FD13A67EB392FCFB4073A7EB63AE62725328 r2 = 0A35A7B0D6A2EEE7EBD83F730DC6CC359C15515F704706C57EB8D70E59A7AD24 s2 = 2A58D3F55356A656F2A1E65A66083B680AEC6C704093CB3A3BCD566FA7120C8A reconstruct the midstate: Code: 01000000 01 B947AB129956139E2ADF1185D384273E145AF8AF35CE55328E5032EC2832D1A7 00000000 47 52 21 023F3C3501D05E6151F5B483C3962251EA2113D8F5B76F58C44A4252B4580ED574 21 033F3C3501D05E6151F5B483C3962251EA2113D8F5B76F58C44A4252B4580ED574 52 AE FDFFFFFF 02 4023050600000000 19 76 A9 14 456B2B3D018F69A8D79CDE078C710D986F26820D 88 AC 4023050600000000 19 76 A9 14 B878B15A1FA6C940F83A28BB7ACE9A0F08AEF7CD 88 AC 00000000 01000000 sighash (same for both signatures) : How to calculate sigHash? 
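For the sighash question: for a legacy (pre-segwit) input, the value z that both parties sign is the double SHA-256 of the serialized preimage shown above — the transaction with the redeemScript in the signed input's script slot, followed by the 4-byte sighash-type suffix (01000000 for SIGHASH_ALL). A minimal sketch:

```python
import hashlib

def sighash(preimage_hex: str) -> str:
    """Double SHA-256 of a legacy sighash preimage, hex in, hex out."""
    data = bytes.fromhex(preimage_hex)
    return hashlib.sha256(hashlib.sha256(data).digest()).digest().hex()
```

Stripping the whitespace from the reconstructed midstate hex above and passing it through this function should reproduce the z1 value.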
z1 = 24917770E481E6AF860E5CBECE6C8DDA74CD7A2BE90FEC53570438F54E8E38DC when verifying the signatures ( r1 == R1_x && r2 == R2_x ), we make use of the uncompressed R point : verify(z1,x1,y1,r1,s1) R1_x = B68E234D58FEAFC61E733CC95C16E1E042D6D5AAD849A0763704D63C4E497997 R1_y = 3199555CE45C38B856C9F64AC6DB27000AB6CEA10CAD76B2B6E246C9A020E707 verify(z1,x2,y2,r2,s2) R2_x = 0A35A7B0D6A2EEE7EBD83F730DC6CC359C15515F704706C57EB8D70E59A7AD24 R2_y = 3199555CE45C38B856C9F64AC6DB27000AB6CEA10CAD76B2B6E246C9A020E707 we can see that ( r1 == R1_x && r2 == R2_x ), and we can also observe.. 1. R1_y == R2_y from this we can tell that.. 2. k1 = -k2 - the nonce used in both signatures is basically the same ! but also.. 3. R1_y == R2_y == P2_y - Both 'R' points and the second public key share the same Y coordinate !! looking at y^2 = x^3 + 7, we can see that there are 3 'x' solutions for each 'y'. we can find these three solutions for our r1_y : cube_root( R1_y^2 - 7 ) mod p sol1 = 0A35A7B0D6A2EEE7EBD83F730DC6CC359C15515F704706C57EB8D70E59A7AD24 sol2 = B68E234D58FEAFC61E733CC95C16E1E042D6D5AAD849A0763704D63C4E497997 sol3 = 3F3C3501D05E6151F5B483C3962251EA2113D8F5B76F58C44A4252B4580ED574 Question: how to calculate cube roots of 1 mod p? 
the three X coordinates share a property with the cube roots of 1 mod p which are : rm1p = 1 rm2p = 7AE96A2B657C07106E64479EAC3434E99CF0497512F58995C1396C28719501EE rm3p = 851695D49A83F8EF919BB86153CBCB16630FB68AED0A766A3EC693D68E6AFA40 And really what's going on with all these points' X coordinate that we gathered is : P2_x * rm1p = P2_x mod p # trivial P2_x * rm2p = R2_x mod p P2_x * rm3p = R1_x mod p **Question : how calculate the cube roots of 1 mod n?** when this is true for some three points on secp256k1, for the cube roots of 1 mod n which are : rm1n = 1 rm2n = AC9C52B33FA3CF1F5AD9E3FD77ED9BA4A880B9FC8EC739C2E0CFC810B51283CE rm3n = 5363AD4CC05C30E0A5261C028812645A122E22EA20816678DF02967C1B23BD72 the following is also true : rm1n * P2 = P2 # trivial rm2n * P2 = R1 rm3n * P2 = R2 recall step (2): ( P2 = -P1 -> d2 = -d1 ), we now also know that {d1,d2,k1,k2} all share the same property with : k1 = d2 * rm2n % n k2 = -d1 * rm3n % n an ecdsa signature is computed like : 1/k * ( z + ( r * d ) ) = s mod n we know that : 1/k1 * ( z1 + ( r1 * d1 ) ) = s1 1/k2 * ( z1 + ( r2 * d2 ) ) = s2 k1 = d2 * rm2n k2 = -d1 * rm3n d2 = -d1 substitute k2: 1/(-d1 * rm3n) * ( z1 + ( r2 * (-d1) ) ) = s2 ## multiply by rm2n 1/d1 * ( z1 + ( r2 * (-d1) ) ) = -s2 * rm3n z1/d1 + (r2 * (-d1))/d1 = -s2 * rm3n z1/d1 - r2 = -s2 * rm3n z1/d1 = ( -s2 * rm3n ) + r2 ## "divide" by z1 we get equation that we can use to solve for d1 : 1/d1 = ( ( -s2 * rm3n ) + r2 ) * 1/z1 mod n which gives us : d1 = C3FC5135DF80FC592FD8A8A278799F6CD493CD5786858E9022475D52EE21B654 cU9fw5RaHJNuEEWRgxo7xpLVDtJNNwYnuPHKyzw1m9Z4B5C19dik d2 = 3C03AECA207F03A6D027575D87866091E61B0F8F28C311AB9D8B0139E2148AED cPbMwEBKaLTxXdqXDLGeNYyTyzepcaoARKzxL1bwvDJodd1JynPZ and now we can redeem the input at 10b1bbb7477d0736b4cadd18cf93f02a0ecd01d0e056b1ab9333aaf95ae914e1. but the puzzle says that we need to "obtain ownership of the coins", so what about the very first spend at a7d13228... ? 
since we had :

k1 = d2 * rm2n
k2 = -d1 * rm3n

how about we try : from {k1, k2} we get the two keypairs :

k1 = C05A50169BBE16DB798465D7FA4B4FF95BD7FD3B83057181406AD4E31491D1AB
K1 = 03B68E234D58FEAFC61E733CC95C16E1E042D6D5AAD849A0763704D63C4E497997
address : mkaczxMUDgN9usu7hqpBiYKjZ6zJguFr1v

k2 = 03A2011F43C2E57DB65442CA7E2E4F7378BBD01C03801D0EE1DC886FD98FE4A9
K2 = 030A35A7B0D6A2EEE7EBD83F730DC6CC359C15515F704706C57EB8D70E59A7AD24
address : mxLMDERfVDfiQdkrY7gVbiKRYupTfHgZqd

the address for k1 doesn't look familiar, but mxLMDERfVDfiQdkrY7gVbiKRYupTfHgZqd is the address in the second output! maybe the spender did the same trick?

k3 = -k1 mod n
k3 = 3FA5AFE96441E924867B9A2805B4B0055ED6DFAB2C432EBA7F6789A9BBA46F96
K3 = 02B68E234D58FEAFC61E733CC95C16E1E042D6D5AAD849A0763704D63C4E497997
address : mmr1JWt6t3szFdRpTZ7CjLBTwAzMHnxrrP
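The two cube-root-of-unity questions can be answered numerically. When a prime m satisfies m ≡ 1 (mod 3) — which holds for both the secp256k1 field prime p and the group order n, since three cube roots of 1 exist in each — the nontrivial cube roots of 1 are the elements of order 3, obtainable by raising a random element to the power (m − 1)/3. A sketch (the constants are the standard secp256k1 p and n):

```python
import random

# Standard secp256k1 constants.
p = 2**256 - 2**32 - 977
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def cube_roots_of_unity(m):
    """All three cube roots of 1 mod m, for a prime m with m % 3 == 1.
    pow(x, (m-1)//3, m) is always a cube root of unity; retry until a
    nontrivial one appears, then square it to obtain the third root."""
    assert m % 3 == 1
    while True:
        r = pow(random.randrange(2, m - 1), (m - 1) // 3, m)
        if r != 1:
            return sorted({1, r, (r * r) % m})

roots_p = cube_roots_of_unity(p)   # should match {1, rm2p, rm3p} above
roots_n = cube_roots_of_unity(n)   # should match {1, rm2n, rm3n} above
```

The same idea underlies the secp256k1 endomorphism: multiplying a point's x coordinate by a cube root of 1 mod p corresponds to multiplying the scalar by a cube root of 1 mod n, which is exactly the relationship the walkthrough exploits.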
# Image to tensor

Takes in an image and outputs a dense tensor in a predefined format.

## Inputs

• image - Any image.
• layout - Layout of the output tensor.
• outputType - The data type of the output tensor's elements.
• normalizeIntensity - A Boolean flag that enables or disables intensity-range normalization. If enabled, the intensity levels of the input image are scaled by the equation "out = ((in / channel_max) * max - offset) / divisor" before they are copied to the output tensor. This is done separately for each color channel. See normalizationFactors.
• normalizationFactors - A 3-by-3 matrix that contains the max, offset and divisor values for each color channel. Only the first row is used with grayscale images.

## Outputs

• tensor - A tensor in the defined format. If the input is an RGB image, the output tensor will have three elements in the "c" dimension; gray images produce just one. Use color conversion to convert gray levels to RGB if needed. The "w" and "h" dimensions will match the size of the input image. Use image scaling or cropping as a preprocessing step to fix the tensor size if required by the application. In NCHW and NHWC layouts, the batch size ("n") will be one.

Supported data layouts:

• ChwLayout - Channels, height, width.
• HwcLayout - Height, width, channels.
• NchwLayout - Batch, channels, height, width.
• NhwcLayout - Batch, height, width, channels.

Supported tensor data types.
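The normalization equation and layout handling can be sketched as follows. This is a NumPy illustration, not the component's actual implementation; the (max, offset, divisor) rows are hypothetical values that map 8-bit input to [-1, 1], and channel_max is 255 for 8-bit images:

```python
import numpy as np

def image_to_tensor(img, layout="NCHW", normalize=True,
                    factors=((1.0, 0.5, 0.5),) * 3):
    """img: H x W x C uint8 array. factors: per-channel rows of
    (max, offset, divisor), as in the normalizationFactors matrix.
    Applies out = ((in / channel_max) * max - offset) / divisor,
    then rearranges the axes into the requested layout."""
    channel_max = 255.0
    out = img.astype(np.float32)
    if normalize:
        for c in range(out.shape[2]):
            mx, off, div = factors[c]
            out[..., c] = ((out[..., c] / channel_max) * mx - off) / div
    if layout in ("CHW", "NCHW"):
        out = np.transpose(out, (2, 0, 1))   # HWC -> CHW
    if layout in ("NCHW", "NHWC"):
        out = out[np.newaxis]                # batch dimension of one
    return out
```

For a 4x5 RGB input and the NCHW layout this yields a tensor of shape (1, 3, 4, 5), matching the dimension rules described above.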
## Physics for Scientists and Engineers: A Strategic Approach with Modern Physics (4th Edition)

(a) $v = \sqrt{2gh}$
(b) The speed at the bottom is 2.4 m/s in both cases.

(a) The acceleration down the ramp is $a = g~\sin(\theta)$. We can find the distance $d$ the rock slides: $d = \frac{h}{\sin(\theta)}$. We can then find the speed at the bottom: $v^2 = v_0^2 + 2ad = 0 + 2ad$, so $v = \sqrt{2ad} = \sqrt{(2)(g~\sin\theta)\left(\frac{h}{\sin\theta}\right)} = \sqrt{2gh}$

(b) $v = \sqrt{2gh} = \sqrt{(2)(9.80~m/s^2)(0.30~m)} = 2.4~m/s$

Since the speed at the bottom does not depend on the angle, the speed at the bottom is 2.4 m/s in both cases.
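A quick numeric check of the angle independence (the 30° and 60° angles below are arbitrary choices for illustration, not from the problem statement):

```python
import math

g, h = 9.80, 0.30                  # m/s^2 and m, from part (b)
for theta_deg in (30.0, 60.0):     # two arbitrary ramp angles
    theta = math.radians(theta_deg)
    a = g * math.sin(theta)        # acceleration down the ramp
    d = h / math.sin(theta)        # distance slid along the ramp
    v = math.sqrt(2 * a * d)       # kinematics: v^2 = 2 a d
    print(round(v, 2))             # 2.42 both times: sin(theta) cancels
```

Both angles give the same speed because $a \cdot d = g\,\sin\theta \cdot h/\sin\theta = gh$, so $v = \sqrt{2gh}$ regardless of $\theta$.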
# How to regress two categorical variables I'm not looking for a detailed answer, just some pointers towards possible things I could read to better understand this problem. Let's say that we have a survey that asks two questions, $X$ and $Y$. How do you regress $Y$ against $X$? I know that if $Y$ is binary you can use logistic regression, but generally, how do you regress an unordered $Y$ against an unordered $X$? Ordered $Y$ against unordered $X$? Unordered $Y$ against ordered $X$? Ordered $Y$ against Ordered $X$? I'm working on some survey analysis software, and in it I attempt to predict $Y$ with $X$ using the following method: Suppose $X$ has $X_1,\cdots, X_n$ responses, and $Y$ has $Y_1,\cdots,Y_m$ responses. Then I calculate a matrix, where the $[i,j]$ element is $\mathbb{P}(Y_i|X_j)$. I then have the user enter in a hypothetical response distribution for $X$ (so new $X_1,\cdots, X_n$, call it $X_{new_1},\cdots,X_{new_n}$). Then if you multiply the matrix by this column vector, you get a new distribution for the $Y$ response variable. I'm really not sure how good this method is, and I was very careful to propagate error with each operation (each $\mathbb{P}(Y_i|X_j)$ has a confidence interval) to try not to mislead people, but I came up with this method on my own and it seems too simple. There are a number of variants of logistic regression. If your Y variable has just two levels, you can use the standard version of LR. If you have more than two levels of Y, you can use multinomial LR if they are unordered, or ordinal LR if they are ordered. The distribution of X variables does not affect the type of LR used. You can always represent categorical X variables with dummy codes (the most common type is reference level coding). If the levels of X are ordered, there are other coding schemes available, but they typically use the same number of degrees of freedom and only (in essence) display the output differently. 
You can also substitute real numbers for the levels (i.e., the mean of the underlying continuous variable for each level). This induces measurement error, but saves degrees of freedom. If you have some knowledge of the topic to ground your choices, the pros can outweigh the cons.
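The conditional-probability matrix method from the question is simple to sketch in Python; the survey counts below are invented purely for illustration:

```python
# Hypothetical joint counts from a survey: counts[i][j] = #(Y = Y_i and X = X_j).
counts = [[30, 10, 5],
          [20, 40, 15],
          [10, 10, 60]]

n_y, n_x = len(counts), len(counts[0])

# Column totals: number of respondents giving each X response.
col_totals = [sum(counts[i][j] for i in range(n_y)) for j in range(n_x)]

# M[i][j] = P(Y_i | X_j): each column of the count matrix, normalised.
M = [[counts[i][j] / col_totals[j] for j in range(n_x)] for i in range(n_y)]

# A hypothetical new distribution over the X responses (sums to 1).
x_new = [0.5, 0.3, 0.2]

# Predicted Y distribution: the matrix-vector product M @ x_new.
y_pred = [sum(M[i][j] * x_new[j] for j in range(n_x)) for i in range(n_y)]
print(y_pred)  # a valid probability distribution over the Y responses
```

Because each column of `M` sums to 1 and `x_new` sums to 1, the output is guaranteed to be a probability distribution; this is exactly the "too simple" mixture-of-columns prediction the questioner describes.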
# RGB Wavelength

You can almost pull it off on a Geforce3 but the GeforceFX and Radeon 9700/9800 cards are advanced enough that you can pass 32 bit YCrCb textures to the video card and do all conversion calculations on the video card. The method searches the metamer space for a spectrum that best fits a set of criteria. Visible light is usually defined as having wavelengths in the range of 400-700 nanometers (nm), or 4. Watt-class output power; UV and VIS wavelengths (optional wavelength tuning) High coherence: > 100 m coherence length (< 1 MHz linewidth). Several software packages have wavelength coloring functions (e. Solving for peak emission wavelength. Features: • RGB - Full color illumination • Optimal wavelengths for color mixing • Industry standard footprint • High intensity and cost efficient. However, you can probably achieve the desired effect by converting to HSV color space and selecting a particular range of hues. This is my sixth article in C#. Wavelengths typically range from 800 nm to 1600 nm, but by far the most common wavelengths actually used in fiber optics are 850 nm, 1300 nm, and 1550 nm. A simple tool to convert a wavelength in nm to an RGB or hexadecimal colour. the amount of power emitted in a unit wavelength interval around that wavelength. OD 4 Notch Filters feature narrow rejection bands of just ±2. A web search for XYZ color or CIE 1931 will turn up more information for those who are interested. The spectra below were generated using different RGB values for wavelengths between 380nm and 780nm. Buy laser and rent laser at Laserworld Learn more about wavelengths and colors of a show laser display, white balance and RGB systems. A spectral color is composed of a single wavelength and can be correlated with wavelength as shown in the chart below (a general guide and not a precise statement about color).
• Cones in the eye respond to three colors: red, green, blue – 6 to 7 million cones in human eye – 65% cones respond to red light – 33% cones respond to green light – 2% cones respond to blue light, these being most sensitive – Red, green, and blue are known as primary colors ∗ In 1931, CIE designated specific wavelengths for primary colors. It is possible to convert quickly from wavelength to electron volts For 1 eV: V ~2. We tend to think that by mixing RGB intensities that we can generate any colour or wavelength. dominant wavelength λD = 465 nm, spectral halfwidth Δλ½ = 22 nm. Yes, there is a marginal difference in brightness ( hsb ), after conversion, there is a marginal difference in red and blue ( rgb ), please see the first half of the example below. the red wavelengths of light necessary for seed germination, the Brassica rapa seeds were germinated under the fluorescent light. • Excellent crosstalk of −19. In normal human vision, wavelengths of between 400 nm and 700 nm are represented in the circle, where reds are the longer wavelength and blues and violets are the shorter wavelengths. The true-color view from Landsat is less than half of what it sees. Your computer monitor and television use RGB. In this graph, along the X-axis, is the wavelength, and along the Y-axis is the intensity of that. This would mean the combination of intensities. The color calculator is used to freely convert among many different device-independent color spaces, including standard CIE representations (XYZ, xyY, Lab, LCHab, Luv and LCHuv), Adobe Photoshop working RGB spaces and correlated color temperature. Angle Displacement 0. This can be used to achieve a specific color on the light spectrum. the "color" of that object as it would appear to the eye.
ImageJ, Zeiss, MetaMorph, Volocity) and I don't wish to get into a discussion of the merits of each of their approaches, I was simply searching for software that would apply its colorization scheme to a z-stack. Screens emit light of just three different colours at approximately the following wavelengths: Of course, computer screens are not lasers, so they do not emit just a single wavelength but a broader range of wavelengths. I was impressed by a similar article, so I tried this. They differ only in wavelength from 0. High power led packages include 5050 LEDs, 5mm LEDs, 3mm LEDs, 3528 LEDs, and 2835 LEDs. 1931 – The Commission Internationale de l'Éclairage (CIE) defined a standard system for color representation. Since our eyes encode colors with those components (RGB), it is a very convenient system (although certainly not the only one) to encode not only pure-wavelengths (which form a more or less deterministic combination of retinal response for each chromatic component), but also mixed colors. QBLP679E-RGB PLCC6 RGB LED ----- Product: QBLP679E-RGB Date: March 20, 2014 Page 5 of 10 Version# 2. Blue Light (400 -520 nm) needs to be carefully mixed with light in other spectra since overexposure to light in this wavelength may stunt the growth of certain plant species. For higher power applications a custom design can be done to handle up to 2W, with large core multimode fiber. •Landsat TM/5 •Manila, Philippines •2000/01/26 •Bands: 7,5,1 (RGB) •180 x 170km. Starting in August 2015, we begin with the targeted search and construction of a competent partner on site. (X,Z – Several Hundreds, Y – 0. The colors seem to blend into each other because the light exits at different angles, rather than one unmoving angle. Red is the color of some apples and mostly, raspberries. The wavelength of green light is about 550 nanometers (a nanometer is one-billionth of a meter).
Our experiments with different color combinations show that a tradeoff exists between crosstalk and stripe noise. Physics Light Colour Over the course of millions of years, the human eye has evolved to detect light in the range 380—780nm, a portion of the electromagnetic spectrum known as visible light , which we perceive as colour. • RGB wavelengths can be demultiplexed after light propagation of 20 mm. Diffraction. Not only that but there can be "metamers" - spectra that are completely different but that give the same RGB values. RGB has the largest gamut of the three and RYB, the. All objects warmer than absolute zero (−273 ∘ C/−459 ∘ F) emit infrared radiation at specific wavelengths (LWIR and MWIR bands) in an amount proportional to their temperature. Hyperspectral images provide both spatial and spectral representations of scenes, materials, and sources of illumination. As a result, the contrast improves dramatically, showing a great deal of structure. 6 Encoding formats Adobe RGB color component values can be encoded using integer or floating-point encodings. Kutulakos1 Liang Shen2 1 Universityof Toronto 2 Qualcomm Canada Inc. When two spectra of monochromatic light look the same, the colors are called metamers. RGB LED SPECIFICATIONS Red Green Blue Unit Note: For applications that require white illumination, contact factory. in the case of RGB or, a Printing Industry Standard in the case of CMYK, and/or associated material type or illumination (paper, ink set, lighting, etc. RGB::fromWaveLength uses the model published by Dan Bruton for the conversion. Resolution = 300m/px. 1msec width, for long operating life, max. Enter a wavelength in nanometers between 380 and 780 and get an approximate RGB value. For higher power applications a custom design can be done to handle up to 2W, with large core multimode fiber. Blue light's wavelength is the most visible and brightest to the human eye, especially at night. 
The so-called "near infrared" spectrum — from about 700 nanometers (the longest wavelength red we can see with our eyes) to around 1000 nm (the longest wavelength to which our camera sensors. This is general color science, not something specific to MATLAB. The real color of the sun is white, but when viewed from Earth, the sun appears yellow because the atmosphere scatters higher wavelength colors like yellow, red, Listen to this post The real color of the sun is white. Still, the SML signal we'll see with the filters will be slightly different from the initial beam of light, thus the colors will be modified. Experiment with this RGB color mixer to get a feel for the effect of mixing the three different additive primary colors. The colors seem to blend into each other because the light exits at different angles, rather than one unmoving angle. Some creatures cannot see different colours except black and white. This method of WDM is known as Dense Wavelength Division Multiplexing, or DWDM. Suppose I wanted my monitor to display a color that appeared to my eye to be approximately the same as if I saw a light of wavelength X. wavelength range, radiometric power efficiency increases as wavelength decreases. If the ambient light is one-quarter of full intensity white, you can assign an RGB value to the light of (0. In the visible light spectrum, the wavelength of red light ranges from 622 to 780 nanometers. Item Condition Time/Cycle. Features: • RGB - Full color illumination • Optimal wavelengths for color mixing • Industry standard footprint • High intensity and cost efficient. Dominant Wavelength In order to calculate dominant wavelength, we must first introduce the identification of a color by its “x-y chromaticity coordinates” as plotted in the Chromaticity Diagram. Our Sun’s surface temperature is about 6000 K; its peak wavelength color is a slightly greenish-yellow. 
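The "peak wavelength" of a star mentioned above comes from Wien's displacement law, λ_peak = b/T. A quick Python check, using the roughly 6000 K solar surface temperature quoted in the text:

```python
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def peak_wavelength_nm(temp_k):
    """Wien's displacement law: wavelength (in nm) of peak blackbody emission."""
    return WIEN_B / temp_k * 1e9

# Hotter stars peak at shorter (bluer) wavelengths:
print(round(peak_wavelength_nm(6000)))   # about 483 nm, in the green
print(round(peak_wavelength_nm(10000)))  # about 290 nm, in the ultraviolet
```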
9 V Power Peak Dissipation P D 100 80 80 mW Maximum Reverse Voltage V RM 5 5 5 V Dominant Wavelength λ d. We invited the prominent streamer and cosplayer, Ying Tze, to our lab since she has the same question too. has acquired the complete spectrometer business from RGB-Photonics. Can be used with the channel or light bars listed on the Accessories tab. As the temperature of a star increases, the peak of its continuous spectrum shifts to shorter (bluer) wavelengths. Basically, I am going to make a graph/plot in Matlab, for an intensity vs wavelength graph of an image of a spectrum. Relative Intensity vs. The dominant wavelength or hue of the color is the wavelength of the pure color where the line meets the edge. Buy 3W RGB LED DJ Disco Party Crystal Ball Stage Effect Light Remote Control at Walmart. This is my sixth article in C#. I don’t know if you are still active, but if you are, I’m struggling to input RGB values in, and so thus cannot use this as an RGB to Wavelength converter. Mixing colors on a palette produces tints and hues not seen as spectral colors. I ended up returning product because of lighting; the RGB lighting was very bright beneath/around the keys, but very dim under key letters. This issue is devoted to one of the most elusive aspects of understanding digital cameras--how they reproduce color. How do I convert this value to rgb? The xcolor package goes over some conversions on. Light travels through space as a wave, and the distance between two wave peaks is the wavelength of that beam of light. This the distance one has to travel down the string to return to the same point in the wave cycle at any given instant in time. What is the difference between Rods and Cones? • Rods are rod-shaped, and cones are cone-shaped. 5 1 0 400 500 600700 0. Unique or custom power and wavelength configurations are also available with a single mode, PM fiber output. Introduction to CMOS Image Sensors. 
I want to convert that to a single RGB color to display on-screen, i. Every material on earth shows its own strength of reflection in each wavelength when it is exposed to the EM waves Sensors aboard a platform are capable to acquire the strength of reflection and radiation in each wavelength. Red has the longest wavelength, with each color decreasing away from it. This wavelength means a frequency range of roughly 430-750 terahertz (THz). Remember: White light contains the full spectrum of Red, Green, and Blue wavelengths. 400 500 600 700 0 20 40 60 80. Laser safety eyewear is designed to reduce the amount of incident light of specific wavelength to safe levels, while transmitting sufficient light for good vision. Components - 1 * RGB LED. Red, green, and blue chips have 625nm/520nm/465nm wavelengths respectively. Specifications of newly developed RGB laser module. I am aware that the "RGB to wavelength" question is widely covered on the net, and I know that there is no way to perform such a conversion. Depending on the color selected this filter will diminish all pixels that are not of the selected colors. Open your eyes to the exciting world of lighting with our extensive selection of sales tools and educational resources. The entire visual spectrum runs, approximately, from a wavelength of 400nm (blue) to 700nm (red). We can see. If green had the shortest wavelength, we would have a green sky. Our lasers feature excellent beam quality and wavelength stability, exceptional reliability and an impressive cost/performance ratio. As you can see the functions are not discrete wavelengths for RGB, but rather weighted averages of (overlapping) wavelengths. Wavelength and luminance about LCD monitor. The build is extremely simple because the wavelength of visible light being emitted is entirely digitally controlled by the RGB led. Subtractive colors (CMYK) are used for printing and are basically a complement to RGB. RGB Color: This is color based upon light. 
The results show that vein finder was successfully designed with controllable wavelength in the range of 600-696 nm using RGB LED. There are three activities included in this Concept Builder. In the visible light spectrum, the wavelength of red light ranges from 622 to 780 nanometers. Below I have spectral locus coordinates from 380nm to 780m in steps of 5 nm. The degree to which different wavelengths of light stimulate the three kinds of cones is messy; the graphs of intensity (of cone response) vs. Th crease waveleng rang greate lexibi allowin fo h s sing application ro hi ion. Most leaves of growing plants , such as trees and bushes , are green. Buy your 1506 from an authorized ADAFRUIT distributor. Phosphors convert light energy of one wavelength and redistribute that energy as a different wavelength. I ended up returning product because of lighting; the RGB lighting was very bright beneath/around the keys, but very dim under key letters. $\endgroup$ – Carl Witthoft Jan 19 '14 at 17:23. A large percentage of the visible spectrum (380 nm to 750 nm wavelength) can be created using these three colors. The creation of a violet from red and blue appears to be a puzzle since violet involves a shorter wavelength of light than blue. They measure color based on an RGB color model (red, green, blue). RGB-AC (Amber & Cyan) quinti-chromatic LEDs can be used for special lighting applications wherein there is a high importance for color accuracy and color rendering in the vision triang They can be used to cover the entire range of reasonable general CRI values while achieving a quasi-continuous broadband spectrum. This is my sixth article in C#. Built-in power supply reverse connect protection module, reversed power input will not damage the IC. This is general color science, not something specific to MATLAB. RGB in Gamut is CMYK RGB COLOR NUMBERS 0000ff = blue, ff0000 = red, 00ff00 = green, ffff00 = = yellow,. 
This wavelength also regulates flowering, dormancy periods, and seed germination. Maybe you could map the wavelength to the HSV color model first, then convert it to RGB. These SMLP36RGB LEDs are available in 470nm, 527nm, and 624nm wavelength with 35mcd and 110mcd luminous intensity. China Long Life Decorative LED Strip Light RGB Luces LED, Find details about China Luces LED, Flexible LED Strip Light from Long Life Decorative LED Strip Light RGB Luces LED - MSS LED Lighting Co. It includes all-in-one emitters in Red, Green, and Blue color on 16mm pitch, or 60 per meter. -- differing from each other in the purity of their primary colors, which affects their gamut-- they. This fact sheet reviews the basics regarding light and color and summarizes the most important color issues related to white light LEDs, including recent advances. Convert RGB to Hex color values here:. Desaturating chromaticity C by mixing with the white point chromaticity W yields A, a within-gamut approximation of C. Converting RGB Images to LMS Cone Activations Judah B. By convention, it has been decided by the International Commission on Illumination - CIE - that the primary red colour is light with a wavelength of 700 nm, green 546 nm and blue 436 nm. Continuous Spectrum. hi it doesn't work lots of variables are undefined and many errors, if you could fix this please because I am sure this is a good program.
The cones are divided into three main categories depending on their wavelength specificity; namely, S- cone (short- wavelength sensitive cone), M- cones (middle- wavelength sensitive cone), and L- cone (long- wavelength sensitive cones). Small Basic Featured Program - Wavelength To RGB Converter ‎02-12-2019 04:35 PM. QBLP679E-RGB PLCC6 RGB LED ----- Product: QBLP679E-RGB Date: March 20, 2014 Page 5 of 10 Version# 2. Depending on the color selected this filter will diminish all pixels that are not of the selected colors. QBLP679E-RGB – Red, Green, Blue (RGB) 620nm Red, 525nm Green, 470nm Blue LED Indication - Discrete 2V Red, 3. Abstract Organic dirt on touch surfaces can be biological contaminants (microbes) or nutrients for those but is often invisible by the human eye causing challenges for evaluating the need for clean. This chart shows the relationship between wavelength and hue for an ideal observer viewing an ideal light source under ideal viewing conditions. 60 degree viewing angle. In addition, reflectance values for pleochroic materials are listed as R 1 and R 2 values. Why are they useful? There are three major reasons. The LED light emitted the blue wavelengths of light required for plant growth. But in RGB space, a three-dimensional array is needed. What you're actually looking for is the RGB triplet most closely approximates the color of the given pure wavelength. Red-Green-Blue wavelength TV holography and interferometry for Microsystems metrology Journal of Optics, 40, 176-183(2011). Thanks for your comments Dr. ImageJ can work with grayscale images having bit depths from 1 bit (binary images, showing just black or white pixels) to 32 bits per pixel. A bit scientific question, I am certain, but I need an algorithm for converting the RGB colour of an image to wavelength. 
As shown in the diagram below, the combiner internally consists of two fused fiber wavelength combiners that merge light from the three wavelength ports (ports 1 - 3) into a single output (common port). The test box beside each slider shows the relative proportions of red, blue and green on a scale from 1 to 255. Change the wavelength of a monochromatic beam or filter white light. Enter red, green and blue color levels (0. is there a way to convert these value to wavelength and get a plot of intensity as a function of wavelength ?. Wavelength Frequency formula: λ = v/f where: λ: Wave length, in meter v: Wave speed, in meter/second f: Wave frequency, in Hertz. Iske wavelengh 630–740 nm hae. RGB colors vs Wavelengths. 375 micron to 0. But, humans identify different colors in the visible range. This can be used to achieve a specific color on the light spectrum. Starting in August 2015, we begin with the targeted search and construction of a competent partner on site. We tend to think that by mixing RGB intensities that we can generate any colour or wavelength. 5 Relative Intensity vs. This is my sixth article in C#. Red is the color of some apples and mostly, raspberries. The input is wavelength in the range of 380 nm through 750 nm (violet through red). As you can see the functions are not discrete wavelengths for RGB, but rather weighted averages of (overlapping) wavelengths. Wavelength to RGB. In a similar manner for colors where wavelength groups are measured and binned, single wa velength groups will be shipped on any one reel. All possible colors can be specified according to hue, saturation, and brightness (also called brilliance ), just as colors can be. The rgb module contains a function that converts from a wavelength in nm to an 3-tuple of (R,G,B) values, each in the range 0--255. The wavelength of the color corresponding to the perceptual notion of hue. Applications. • RGB wavelengths can be demultiplexed after light propagation of 20 mm. 
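For light in vacuum the formula above reads f = c/λ. A short Python check that the roughly 400-700 nm visible band corresponds to the 430-750 THz range quoted earlier:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def frequency_thz(wavelength_nm):
    """Frequency (in THz) of light with the given vacuum wavelength (in nm),
    from f = c / lambda."""
    return C / (wavelength_nm * 1e-9) / 1e12

# The ~400-700 nm visible band spans roughly 430-750 THz:
print(round(frequency_thz(700)), round(frequency_thz(400)))  # 428 749
```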
Wavelength of Visible Light Spectrum We are capable of perceiving only a fraction of the electromagnetic spectrum. Visible Wavelength Fused Coupler (RGB) PASSIVE OPTICS Page 1 of 2 Description: Go!Foton's Visible Wavelength Fused Coupler utilizes Fused Biconical Tapering Technology to create a composite waveguide structure capable of splitting or combining light in the visible region (400 to 700nm). Four Band Digital Imagery INFORMATION SHEET April 2011 What is four band imagery? Four band imagery is multispectral, which means that it is collected from several parts of the electromagnetic spectrum. The Wavelength Concept Builder is comprised of 36 questions. The red, green, and blue (RGB) colors, when combined, create white light. The calculated shift in ( u;v ) coordinates of an RGB-LED as the temperature is changed in increments of 20 C. 3 Principal schematic of the laser head A modelocked laser is used in the RGB system for two reasons: much higher peak intensities compared to continuous wave (cw) lasers are achieved, which are necessary for efficient non-linear frequency generation. LEDtronics Discrete LED Color Chart LEDtronics Code LED Chip Code. NanoCell Technology filters out dull colours to enhance the purity of the RGB spectrum. # ' @param wavelength A wavelength value, in nanometers, in the human visual range from 380 nm through 750 nm. This was done by using a FORTRAN program (linked below) that uses linear approximations for the RGB color coefficients.
Th crease waveleng rang greate lexibi allowin fo h s sing application ro hi ion. # ' @param gamma The \eqn{\gamma} correction for a given display device. This is the most expensive and Trended photo effect. From wavelength to RGB filter 79 adjustable brightness. Our lasers feature excellent beam quality and wavelength stability, exceptional reliability and an impressive cost/performance ratio. The output of a camera sensor is RGB but does not have "primaries" in the sense that a display or RGB colorspace does. Type * in the search field. The question is how did they come out with this ratio? Is there a formula what can help derive this magic ratio based on wavelength?. video/x-raw-rgb, bpp=(int)32, depth=(int)32, endianness=(int)4321, red_mask=(int)255, green_mask=(int)65280. This is only an approximate conversion and will not appear the same on every display device. All you need to match your RGB and color data with paint, ink, color standards and commercial color collections. Laser Diodes by Wavelength Our extensive laser diode selection includes options with output in the 375 - 2000 nm range and powers up to 3 W. A large percentage of the visible spectrum (380 nm to 750 nm wavelength) can be created using these three colors. The UPPER RIGHT image displays short wavelength infrared bands 4, 6, and 8 as RGB. The peak of PCA component 2, expressed at 350 nm of fluorescence spectrum, is one of the few peaks of PCA components located in short wavelength. This note describes conversions from Bayer format to RGB and between RGB and YUV (YCrCb) color spaces. Not only that but there can be "metamers" - spectra that are completely different but that give the same RGB values. What is a true color astronomical image? Is it what an astronaut would see if there was an eye piece on, say, the Hubble Space Telescope? Or is it one that captures the intrinsic colors emitted by the stars, nebulae, and gas clouds in galaxies?. This is, in fact, not true. 
Hence, thermal cameras focus and detect the radiation in these wavelengths and usually translate it into a grayscale image for the heat representation. As you can see the functions are not discrete wavelengths for RGB, but rather weighted averages of (overlapping) wavelengths. Here we use Hue (the angle on the color wheel), Saturation (the amount of color/chroma) and Lightness (how bright the color is). The energy of a single photon of green light of a wavelength of 520 nm has an energy of 2. ImageJ can work with grayscale images having bit depths from 1 bit (binary images, showing just black or white pixels) to 32 bits per pixel. wavelength range, radiometric power efficiency increases as wavelength decreases. RGB demultiplexer based on polycarbonate multicore polymer optical fiber. The purpose of the article is to be able to build a class that allows any C# programmer to perform image processing functionality. The 'Completely Painless Programmer's Guide to XYZ, RGB' was written in the hope that it might be of use to technically savvy people who know a whole lot about the code and the mathematics that goes into making an image editing program, but perhaps not so much about color spaces and ICC profiles. Have you ever wondered what the color of a LED with 430nm wavelength looks like? Or what color a HeNe laser with 612nm has? Just enter the wavelength (in nm) and the app will show you what color. • RGB demultiplexer is composed of integrated polycarbonate rods along the PCF length. In normal human vision, wavelengths of between 400 nm and 700 nm are represented in the circle, where reds are the longer wavelength and blues and violets are the shorter wavelengths. This is the case in our example. What is a good way to convert a RGB pixel to a wavelength ? Actually, I want to simulate a cemera filter with four band-pass ( centered at 450 , 500 , 750 and 800 nm with respective bandwidths of. 
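The photon-energy figure mentioned above follows from E = hc/λ; for 520 nm green light it works out to about 2.38 eV:

```python
H = 6.62607015e-34      # Planck constant, J*s
C = 299_792_458.0       # speed of light in vacuum, m/s
EV = 1.602176634e-19    # joules per electron volt

def photon_energy_ev(wavelength_nm):
    """Energy of a single photon of the given wavelength, E = h*c/lambda, in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(round(photon_energy_ev(520), 2))  # 2.38 eV for 520 nm green light
```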
Accurate computer display of the color and RGB values of a particular wavelength of the visible spectrum is an application of both color and vision science and color display technology. Note that frequency and wavelength are inversely proportional. The RGB (Red, Green, Blue) color model is the most known, and the most used every day. These are both double wide and populated with single color emitters. At slightly longer wavelengths is infra-red light, invisible to humans but received by your skin, and recognized as heat by your body. As the temperature of a star increases, the peak of its continuous spectrum shifts to shorter (bluer) wavelengths. Matthew Francis - Apr 30, 2012 2:10 pm UTC. In order to ensure availability, single wavelength groups will not be orderable Note • Wavelengths are tested at a current pulse duration of 25 ms and an accuracy of ± 1 nm. It's not possible to unambiguously calculate Radiant Exitance from RGB values because many different SPD's may resulting different Radiant Exitance but project to the same RGB. But in RGB space, a three-dimensional array is needed. This is general color science, not something specific to MATLAB. Item Condition Time/Cycle. Draft, Revision 111 Page 6 3. Gould's Combiners and OCT products are made of single-mode or PM optical fibers enabling MUX/DEMUX function for any 2, 3, or 4 wavelengths in the RGB/RGBV spectrum. It includes all-in-one emitters in Red, Green, and Blue color on 16mm pitch, or 60 per meter. It differs from previous methods in that it attempts to create physically plausible spectra for reflectances. The MAX44005 integrates 7 sensors in one product: red, green, blue (RGB) sensors; an ambient light (clear) sen. Purpose = For improved fluorescence retrieval and to better account for smile together with the bands 665 and 680 nm. The observer’s task was to modify the brightness of each of these three primary colors till he obtained a color identical with the sample one. 
The questions are divided into 12 different question groups. OD 4 Notch Filters feature narrow rejection bands of just ±2. It is characterised by either λ or energy of light (E). Fiber Optic Communications. QBLP679E-RGB PLCC6 RGB LED ----- Product: QBLP679E-RGB Date: March 20, 2014 Page 5 of 10 Version# 2. Abstract A convenient solution to RGB-Infrared photography is to extend the basic RGB mosaic with a fourth filter type with high transmittance in the. RGB to Wavelength, is it possible? - MATLAB Answers Mathworks. Wavelengths: 400 – 2000 nm; Power range: 20 mW–100 W; unified product design and control interface; PC user interface available; One size fits all? We don’t think so, either. Should find the nearest color names from the XKCD color survey instead (and “nearest” should be defined as distance in L*a*b* space, not in RGB space). The Wavelength Tunable LED demo app can be used to assess the emission properties of a AlGaN/InGaN light-emitting diode in order to assist in the design of LEDs that emit within a user-specified wavelength range. com > Wavelength_to_RGB. To determine the colour output by an RGB display, you need to know the colour (chromaticity) of each of the 3 primary colours. Visible Wavelength Fused Coupler (RGB) PASSIVE OPTICS Page 1 of 2 Description: Go!Foton’s Visible Wavelength Fused Coupler utilizes Fused Biconical Tapering Technology to create a composite waveguide structure capable of splitting or combining light in the visible region (400 to 700nm). Nevertheless, the picture above will allow us to understand what actually happens. Positive rating current 20ma, the maximum peak current does not exceed 100ma. Hence, thermal cameras focus and detect the radiation in these wavelengths and usually translate it into a grayscale image for the heat representation. *Please refer to the Specification sheet about Taping. If you want to make a similar display, this RGB LED matrix will meet you need. 
This lesson provides an overview of meteorological and environmental RGB products, namely, how they are constructed and how to use them. Adobe RGB Color Space Specification October 12, 2004. Scriabin's theory was that each note in the octave could be associated with a specific colour, and in Prometheus, the Poem of Fire, he wrote the colours and music to match. Without light, there would be no color, and hence no RGB World. stochastically assigns a wavelength to each photon, maintaining 1 nm sampled spec- Journal of Computer Graphics Techniques Simple Analytic Approximations to the CIE XYZ Color Matching Functions. *Part name is individual for each rank. RGB to HSV color conversion. *3 Collimated light beams are light beams with minimal divergence. Single-mode fiber for transmitting wavelengths above 800nm. Unit A 615 620 B 620 625 nm C 625 630 Dominant Wavelength λ D for True Green @ I F =20mA Bin Min. 02 - PhET Interactive Simulations. Red light has the longest wavelength and the lowest frequency. NATURAL_COLOR_RGBI —Create a 4-band mosaic dataset, with red, green, blue, and near infrared wavelength ranges. The vast majority of imaging applications involve the use of more than one fluorescent probe e. [2] [3] This is essentially opposite to the subtractive color model, particularly the CMY color model , that applies to paints, inks, dyes, and other substances. By varying how brightly each of the red, green and blue channels shine, the combined wavelengths can stimulate the eye in literally millions of ways.
OD 4 Notch Filters feature narrow rejection bands of just ±2. Find the right LEDs and LED modules for your lighting needs. Basically, I am going to make a graph/plot in Matlab, for an intensity vs wavelength graph of an image of a spectrum. It should be emphasised that this is a 'device-independent' colour space in which each primary colour (X,Y,Z) is always constant, unlike RGB which varies with every individual device (monitor, scanner, camera, etc. GitHub Gist: instantly share code, notes, and snippets. Reflective/transmissive: the reflection or transmission spectrum (0-100% at each wavelength) is multiplied by this illuminant before the individual color values (X, Y, Z) are computed. Need a free color converter? Nix can help! Download our free Android and iOS app for quick and easy CMYK, RGB, LAB, or XYZ conversions or use our online tool. Integer encodings shall be unsigned with 8, 10, 12, or 16 bits per component with the same number of bits for all three components. MUTHU et al. In the RGB color model the brightest violet is (255,0,255). Subtractive color starts with an object that reflects light and uses colors to subtract parts of the white light illuminating the object to produce other colors. A typical human eye will respond to wavelengths from about 380 to 750 nm. Gould's truly fused Visible Wavelength RGB (Red, Blue & Green) polarization maintaining (PM) 1×2 and 2×2 fiber optic couplers, Taps and Optical signal splitters offer high extinction ratio (ER) with polarization launch into a single or dual axis (Slow axis & Fast axis). RGB uses additive colour mixing and is the basic colour model used in television or any other medium that projects colour with light. What does this Hex to RGB converter do? It takes input in the form of a hex color code value and converts that value to a RGB value that can be used to specify color in photo editing software. The requested chromaticity C cannot be achieved by mixing the given RGB primaries. 
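The hex-to-RGB conversion described above is a mechanical base-16 parse of three digit pairs; a minimal sketch (the function name and sample colour are my own):

```python
def hex_to_rgb(code):
    """Convert a hex colour code such as '#1e90ff' into an (R, G, B)
    tuple: each pair of hex digits is one 8-bit channel, parsed base 16."""
    code = code.lstrip('#')
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb('#ff00ff'))  # (255, 0, 255), the brightest violet mentioned above
```

The same parse works with or without the leading `#`, since it is stripped first.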
video/x-raw-rgb, bpp=(int)32, depth=(int)32, endianness=(int)4321, red_mask=(int)255, green_mask=(int)65280. Abstract: Hyperspectral images provide higher spectral resolution than typical RGB images by including per-pixel ir-. #' @param gamma The \eqn{\gamma} correction for a given display device. RGB Combiner Design. The Wavelength Concept Builder comprises 36 questions. Try to get the circle to be between 4 and 6 inches in diameter. For example, a red phosphor will convert blue light energy, and re-emit red wavelength energy. RGB colors vs wavelengths. RGB Photonics offers a wide variety of laser modules suitable for a large range of applications. This would mean the combination of intensities. RGB Photonics GmbH is planning the market consolidation in China through financial support from ERDF funds through the foreign economic center of Bavaria. A tuple of integers for (R, G, B) is returned. I have the following color: \definecolor{SEviolet}{wave}{377}, which is a violet with a wavelength of 377 nm. Following is a brief description of the different colours and their sounds and frequencies. “I need a broader range for my fluorophores”: Only a few wavelength options existed in 2006, and now the CoolLED pE-4000 has 16, spanning 365–770 nm.
• Cones in the eye respond to three colors: red, green, and blue
  – 6 to 7 million cones in the human eye
  – 65% of cones respond to red light
  – 33% of cones respond to green light
  – 2% of cones respond to blue light, these being the most sensitive
  – Red, green, and blue are known as primary colors
• In 1931, the CIE designated specific wavelengths for the primary colors. This is the basic.
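A single wavelength has no exact RGB value (it depends on the display's primaries, as noted above), but a rough piecewise-linear fit in the style of Dan Bruton's widely copied approximation gives usable colours. The break-points and gamma below are assumptions for illustration, not a colorimetric standard:

```python
def wavelength_to_rgb(wl_nm, gamma=0.8):
    """Approximate (R, G, B) for a visible wavelength (380-750 nm).

    Simplified piecewise-linear sketch; an accurate conversion needs
    the CIE colour-matching functions plus the display's primaries.
    """
    if 380 <= wl_nm < 440:
        r, g, b = (440 - wl_nm) / 60.0, 0.0, 1.0   # violet range
    elif 440 <= wl_nm < 490:
        r, g, b = 0.0, (wl_nm - 440) / 50.0, 1.0   # blue to cyan
    elif 490 <= wl_nm < 510:
        r, g, b = 0.0, 1.0, (510 - wl_nm) / 20.0   # cyan to green
    elif 510 <= wl_nm < 580:
        r, g, b = (wl_nm - 510) / 70.0, 1.0, 0.0   # green to yellow
    elif 580 <= wl_nm < 645:
        r, g, b = 1.0, (645 - wl_nm) / 65.0, 0.0   # yellow to red
    elif 645 <= wl_nm <= 750:
        r, g, b = 1.0, 0.0, 0.0                    # red
    else:
        r, g, b = 0.0, 0.0, 0.0                    # outside the visible range
    return tuple(round(255 * c ** gamma) for c in (r, g, b))
```

For example, `wavelength_to_rgb(700)` lands on pure red and `wavelength_to_rgb(580)` on yellow, consistent with red light sitting at the long-wavelength end of the visible band.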
# Invert $f(x)=\frac{E}{4\pi D |x|}e^{\frac{-|x|}{\sqrt{DT}}}$

I'm trying to invert: $f(x)=\frac{E}{4\pi D |x|}e^{\frac{-|x|}{\sqrt{DT}}}$ where $E$, $D$ and $T$ are just some arbitrary real parameters. Mathematica ends up with an expression in terms of ProductLog, which is the Lambert W-function: the inverse of $g(W)=We^W$. It looks like $$\sqrt{DT}\,\mathrm{ProductLog}\!\left(\frac{E}{4\pi D^{3/2}\sqrt{T}\,y}\right)$$ but I'd like to arrive at it myself to see how it does it. I've only made the obvious steps: $$y=\frac{E}{4\pi D |x|}e^{\frac{-|x|}{\sqrt{DT}}}\Rightarrow\sqrt{DT}\,\log\!\left(\frac{E}{4\pi D y}\right)=\log(|x|)\,|x|$$ and I imagine the Lambert W-function will come in somewhere here.

Write your equation as $t = e^{-s}/s$, where $s = |x|/\sqrt{DT}$ and $t = 4 \pi D^{3/2} T^{1/2} y / E$, and then as $s e^s = 1/t$. Thus $s = W(1/t)$.

$$a\log a=(\log a)e^{\log a}$$ where $a=|x|$. Now take the Lambert W function.
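The inversion can be checked numerically without Mathematica. The sketch below solves $we^w = t$ with a hand-rolled Newton iteration; the solver and the sample parameter values are my own, not from the thread:

```python
import math

def lambert_w(t, tol=1e-12):
    """Principal branch of W: solve w * exp(w) = t for t > 0 by Newton's method."""
    w = math.log1p(t)  # crude starting guess, adequate for t > 0
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - t) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

# Forward-evaluate f at arbitrary parameters, then invert:
E, D, T = 2.0, 0.5, 3.0
x = 1.7
y = E / (4 * math.pi * D * abs(x)) * math.exp(-abs(x) / math.sqrt(D * T))

# s = |x|/sqrt(DT) satisfies s*exp(s) = E / (4*pi*D**1.5*sqrt(T)*y),
# so |x| = sqrt(DT) * W(E / (4*pi*D**1.5*sqrt(T)*y)).
x_recovered = math.sqrt(D * T) * lambert_w(E / (4 * math.pi * D**1.5 * math.sqrt(T) * y))
print(abs(x_recovered - x))  # ~0, up to floating-point noise
```

The recovered $|x|$ matches the input to machine precision, which is consistent with the ProductLog expression Mathematica reports.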
# That's a lot of squares Discrete Mathematics Level 3 $\displaystyle \sum _{ j=0 }^{ 100 }{ {\binom{100}{j} }^{ 2 } } = \binom{m}{n}$ If $$m,n$$ are positive integers that satisfy the equation above, determine the minimum value of $$m+n$$.
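The sum is a special case of Vandermonde's identity, $\sum_{j=0}^{n}\binom{n}{j}^2 = \binom{2n}{n}$, which gives $m = 200$, $n = 100$ and hence $m + n = 300$. A brute-force check for $n = 100$:

```python
from math import comb

n = 100
total = sum(comb(n, j) ** 2 for j in range(n + 1))
matches = (total == comb(2 * n, n))  # Vandermonde: sum of C(n,j)^2 == C(2n,n)
print(matches)  # True
```

Since $\binom{m}{k} < \binom{200}{100}$ for every $m < 200$, no smaller $m + n$ can produce the same value.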
Can UV photons spontaneously convert into IR photons, and vice versa? 1. May 24, 2013 edevere My general question is: can high energy photons convert into many lower energy photons? Could the reverse reaction occur spontaneously? Let's say we have a single photon that was emitted from a distant supernova. We detect it here on Earth. The photon hasn't converted into multiple lower energy photons during the path from the supernova to the Earth. It just gets red-shifted as space expands. So, to start we have: Energy = 12 eV, Spin = 1, Momentum = 12 eV/c, Charge = 0. If the photon could split into 3 lower energy photons of Spin (+1,-1,+1) all in the same direction, we would have: Energy = 4+4+4=12 eV, Spin = 1-1+1=1, Momentum = 4+4+4=12 eV/c, Charge = 0+0+0=0. Since bosons are allowed to be in the same energy state, we could have all 3 new photons be exactly the same energy. Though, the energy values could have been any number of different combinations. What law of physics prevents this splitting from happening? And vice versa, what prevents the 3 photons from converting into 1 higher energy photon? Thanks, Eddie 2. May 24, 2013 mathman There is no law of physics to describe what you are looking for (either way). 3. May 24, 2013 DrChinese On their own, I don't believe so. (Or the likelihood is too small to be of any consideration.) There is a process called Spontaneous Down Conversion (and its reverse Up Conversion) in which 1 photon becomes 2 and vice versa. However, a special crystal is required to make that happen. http://en.wikipedia.org/wiki/Spontaneous_parametric_down-conversion 4. May 24, 2013 DrDu I think this should be possible. You can write down a Feynman diagram with an electron loop with four vertices. On one vertex, the UV photon is absorbed; on the other three, an IR photon is being emitted. Probably only very improbable. Ah, see here: http://en.wikipedia.org/wiki/Delbruck_scattering 5. May 24, 2013 DrDu 6.
May 24, 2013 fzero In an external electromagnetic field, it is possible for a single photon to split into two or more lower energy photons. The process is related to the "scattering of light by light" diagram in DrDu's link. In the absence of an external field, the splitting of a single photon into other photons is impossible by a kinematical argument, discussed in this thread. The details of the kinematic argument are summarized in this post. I didn't discuss the collinearity constraint in detail there. The point is that, since the initial photon has no center-of-mass frame, all of the photons in the process must have their momenta on the same line. 7. May 24, 2013 DrDu Very interesting. It should be possible to describe this in terms of a classical picture drawing cute little rotating arrows, don't you think so? 8. May 24, 2013 fzero This would be a one-loop effect, so I don't think there's a purely classical version of the argument. It is true that the Euler–Heisenberg Lagrangian vanishes for a sum of collinear EM waves, since $\mathbf{E}\cdot \mathbf{B} = 0$ and $\mathbf{E}^2 - \mathbf{B}^2 = 0$. 9. May 24, 2013 Bill_K There's a classical limit in which photon-photon scattering is described by adding quartic terms to the usual Lagrangian, such as $F_{\alpha\beta}F^{\beta\gamma}F_{\gamma\delta}F^{\delta\alpha}$. 10. May 24, 2013 Bill_K 11. May 25, 2013 DrDu Of course, but the relevant Feynman diagram is drawn in the article.
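fzero's kinematical argument can be illustrated numerically: a set of photons has zero total invariant mass only when all their momenta are collinear, so a single massless photon can never decay into a non-collinear set. A sketch with made-up 4-momenta (energies in eV, $c = 1$):

```python
import math

def photon(E, nx, ny, nz):
    """Four-momentum (E, px, py, pz) of a photon of energy E moving
    along the direction (nx, ny, nz); massless, so E = |p|."""
    n = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (E, E * nx / n, E * ny / n, E * nz / n)

def invariant_mass_sq(momenta):
    """m^2 = E_total^2 - |p_total|^2 for a collection of 4-momenta."""
    E, px, py, pz = (sum(c) for c in zip(*momenta))
    return E * E - px * px - py * py - pz * pz

# Two collinear 6 eV photons: zero invariant mass, exactly like the
# single 12 eV photon they could in principle have come from.
collinear = [photon(6, 0, 0, 1), photon(6, 0, 0, 1)]

# Tilt one photon off the axis: the pair acquires a positive invariant
# mass, which a single massless photon cannot supply.
tilted = [photon(6, 0, 0, 1), photon(6, 0, 1, 1)]
```

Energy and momentum can both balance only in the collinear case, which is exactly the constraint quoted in the thread.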
# Kinetic energy Kinetic energy is a form of energy all bodies have when moving. Unless at absolute zero, all particles have kinetic energy. The formula for kinetic energy is $\tfrac{1}{2}mv^2$. If an object is dropped from a height then, just before it hits the ground, it is said that $K.E.=G.P.E.$: the gravitational potential energy at the beginning, $G.P.E.=mgh$, is equal to the kinetic energy just before it hits the surface, $\tfrac{1}{2}mv^2$. This can be used to find the speed of an object or the height from which it was dropped.
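The energy balance $mgh = \tfrac{1}{2}mv^2$ cancels the mass, so both directions of the calculation are one-liners; a small sketch (function names are my own):

```python
import math

def impact_speed(height_m, g=9.81):
    """Speed just before impact for a drop from rest, ignoring air
    resistance: from m*g*h = (1/2)*m*v**2, v = sqrt(2*g*h)."""
    return math.sqrt(2.0 * g * height_m)

def drop_height(speed_ms, g=9.81):
    """Height recovered from the impact speed: h = v**2 / (2*g)."""
    return speed_ms ** 2 / (2.0 * g)
```

With the rounded value g = 10 m/s², a 20 m drop gives exactly 20 m/s, and the two functions invert each other.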
# limit • March 6th 2010, 02:38 AM Dgphru limit sorry im very dumb.. can someone please help me with this: $\lim_{x \to 0}\frac{\sec^2{x}}{2}$ • March 6th 2010, 02:49 AM Prove It Quote: Originally Posted by Dgphru sorry im very dumb.. can someone please help me with this: $\lim_{x \to 0}\frac{\sec^2{x}}{2}$ $\frac{\sec^2{x}}{2} = \frac{1}{2\cos^2{x}}$. So $\lim_{x \to 0}\frac{\sec^2{x}}{2} = \lim_{x \to 0}\frac{1}{2\cos^2{x}}$ $= \frac{1}{2(\cos{0})^2}$ $= \frac{1}{2(1)^2}$ $= \frac{1}{2}$.
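Prove It's algebra can be sanity-checked numerically; since Python's `math` module has `cos` but no `sec`, the same rewrite $\sec^2 x / 2 = 1/(2\cos^2 x)$ is used directly:

```python
import math

def f(x):
    """sec(x)**2 / 2, written via cosine since math has no sec()."""
    return 1.0 / (2.0 * math.cos(x) ** 2)

samples = [f(10.0 ** -k) for k in range(1, 8)]  # x -> 0 from above
```

The samples settle onto 0.5, matching the algebraic limit (and here the function is continuous at 0, so `f(0.0)` is 0.5 exactly).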
Also, by means of the iterative calculation, the sum of all pages' PageRanks still converges to the total number of web pages. So the average PageRank of a web page is 1. The minimum PageRank of a page is given by (1-d). Therefore, there is a maximum PageRank for a page, which is given by dN+(1-d), where N is the total number of web pages. This maximum can theoretically occur if all web pages solely link to one page, and this page also solely links to itself. Great post, I agree with you. Google keeps changing its algorithmic methods, so these days everybody ought to have an honest, quality website with quality content. Content should be fresh on your website and related to the subject; it will assist you in your ranking. Katja Mayer views PageRank as a social network as it connects differing viewpoints and thoughts in a single place.[43] People go to PageRank for information and are flooded with citations of other authors who also have an opinion on the topic. This creates a social aspect where everything can be discussed and collected to provoke thinking. There is a social relationship that exists between PageRank and the people who use it, as it is constantly adapting and changing to the shifts in modern society. Viewing the relationship between PageRank and the individual through sociometry allows for an in-depth look at the connection that results. Our team is made up of industry-recognized thought leaders, social media masters, corporate communications experts, vertical marketing specialists, and internet marketing strategists. Members of the TheeTeam host SEO MeetUp groups and actively participate in Triangle area marketing organizations. TheeDigital is an active sponsor of the AMA Triangle Chapter. Internet Marketing Inc. is one of the fastest-growing full-service Internet marketing agencies in the country, with offices in San Diego and Las Vegas.
We specialize in providing results-driven, integrated online marketing solutions for medium-sized and enterprise brands across the globe. Companies come to us because our team of well-respected industry experts has the talent and creativity to provide your business with a more sophisticated data-driven approach to digital marketing strategy. IMI works with some clients through IMI Ventures, and their first product is VitaCup. There are simple and fast random walk-based distributed algorithms for computing PageRank of nodes in a network.[33] They present a simple algorithm that takes $O(\log n/\epsilon)$ rounds with high probability on any graph (directed or undirected), where $n$ is the network size and $\epsilon$ is the reset probability ($1-\epsilon$ is also called the damping factor) used in the PageRank computation. They also present a faster algorithm that takes $O(\sqrt{\log n}/\epsilon)$ rounds in undirected graphs. Both of the above algorithms are scalable, as each node processes and sends only a small (polylogarithmic in $n$, the network size) number of bits per round. A: I pretty much let PageRank flow freely throughout my site, and I’d recommend that you do the same. I don’t add nofollow on my category or my archive pages. The only place I deliberately add a nofollow is on the link to my feed, because it’s not super-helpful to have RSS/Atom feeds in web search results. Even that’s not strictly necessary, because Google and other search engines do a good job of distinguishing feeds from regular web pages. If you can leave a guest post, leave it. Why? Because it can create relevant referral traffic to the website you own. All you need to do is make your post valuable and free of spam: just important core information which won’t be spoiled by injected backlinks. It’s better to have contextual linking; in other words, the links should merge into your text.
By now, you've likely seen all the "gurus" in your Facebook feed. Some of them are more popular than others. What you'll notice is that the ads you see that have the highest views and engagement are normally the most successful. Use a site like Similar Web to study those ads and see what they're doing. Join their lists and embed yourself in their funnels. That's an important part of the process so that you can replicate and reverse engineer what the most successful marketers are doing. Understand that whatever you're going to do, you'll need traffic. If you don't have any money at the outset, your hands will be tied no matter what anyone tells you. The truth is that you need to drive traffic to your offers if you want them to convert. These are what we call landing pages or squeeze pages. This is where you're coming into contact with the customers, either for the first time or after they get to know you a little bit better. I first discovered Sharpe years ago online. His story was one of the most sincere and intriguing tales that any one individual could convey. It was real. It was heartfelt. It was passionate. And it was a story of rockbottom failure. It encompassed a journey that mentally, emotionally and spiritually crippled him in the early years of his life. As someone who left home at the age of 14, had a child at 16, became addicted to heroin at 20 and clean four long years later, the cards were definitely stacked up against him. I was exactly thinking the same thing what Danny Sullivan had said. If comments (even with nofollow) directly affect the outgoing PR distribution, people will tend to allow less comments (maybe usage of iframes even). Is he right? 
Maybe Google should develop a new tag as well, something like rel=”commented”, to inform spiders about it and give it less value, and WordPress should be installed by default with this attribute 🙂 What I have learnt with comments: only allow them if they give value to your blog. I have used this for one of my main blogs, BPD and Me, and it worked. I have let through comments which were spammy, and it still got a Google PageRank of 2 after a year of learning by mistakes. Google PageRank is always going to be a mystery, and people will try to beat it; they might for a short period, but after that they get caught out. The people who write good quality content will be the winners, so keep writing quality content. A question might be: does Google count how many nofollows there are? I wonder. Should have added in my previous comment that our site has been established since 2000 and all our links have always been followable – including comment links (but all are manually edited to weed out spambots). We have never artificially cultivated backlinks, but I have noticed that longstanding backlinks from established sites like government and trade organisations are changing to ‘nofollow’ (and our homepage PR has declined from 7 to 4 over the past 5 years). If webmasters of the established sites are converting to systems which automatically change links to ‘nofollow’ then soon the only followable links will be those that are paid for – and the blackhats win again. “So what happens when you have a page with “ten PageRank points” and ten outgoing links, and five of those links are nofollowed? Let’s leave aside the decay factor to focus on the core part of the question. Originally, the five links without nofollow would have flowed two points of PageRank each (in essence, the nofollowed links didn’t count toward the denominator when dividing PageRank by the outdegree of the page).
More than a year ago, Google changed how the PageRank flows so that the five links without nofollow would flow one point of PageRank each.” PageRank is a link analysis algorithm that assigns a numerical weighting to each element of a hyperlinked set of documents, such as the World Wide Web, with the purpose of "measuring" its relative importance within the set. The algorithm may be applied to any collection of entities with reciprocal quotations and references. The numerical weight that it assigns to any given element E is referred to as the PageRank of E and denoted by $PR(E)$. Other factors like Author Rank can contribute to the importance of an entity. I agree that if you provided more facts, or the complete algorithm, people would abuse it; but if it were available to everyone, would it not almost force people to implement better site-building and navigation policies and white-hat SEO, simply because everyone would have the same tools to work with and an absolute standard to adhere to? Another excellent guide is Google’s “Search Engine Optimization Starter Guide.” This is a free PDF download that covers basic tips that Google provides to its own employees on how to get listed. You’ll find it here. Also well worth checking out is Moz’s “Beginner’s Guide To SEO,” which you’ll find here, and the SEO Success Pyramid from Small Business Search Marketing. Well, to make things worse, website owners quickly realized they could exploit this weakness by resorting to “keyword stuffing,” a practice that simply involved creating websites with massive lists of keywords and making money off of the ad revenue they generated. This made search engines largely worthless, and weakened the usefulness of the Internet as a whole. How could this problem be fixed? I am not worried by this; I do agree with Danny Sullivan (Great comment Danny, best comment I have read in a long time).
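The quoted ten-points/ten-links example reduces to a one-line change in the denominator; a sketch of that arithmetic (the function is mine, not Google's actual code):

```python
def pr_per_followed_link(page_pr, followed, nofollowed, nofollow_counts_in_denominator):
    """PageRank flowing through each followed link on a page.

    Before the change, nofollowed links were excluded from the
    denominator; afterwards they still count, so the same page
    passes less PageRank per followed link.
    """
    denominator = followed + (nofollowed if nofollow_counts_in_denominator else 0)
    return page_pr / denominator

# The quoted example: 10 PageRank points, 10 links, 5 of them nofollowed.
old = pr_per_followed_link(10, 5, 5, nofollow_counts_in_denominator=False)  # 2.0
new = pr_per_followed_link(10, 5, 5, nofollow_counts_in_denominator=True)   # 1.0
```

Under the newer scheme the PageRank that would have gone through the nofollowed links simply evaporates rather than being redistributed.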
I will not be changing much on my site re: linking, but it is interesting to see that Google took over a year to tell us about the change, yet was really happy to tell us about rel=”nofollow” in the first place and advised us all to use it. When Googlebot crawls a page, it should see the page the same way an average user does. For optimal rendering and indexing, always allow Googlebot access to the JavaScript, CSS, and image files used by your website. If your site's robots.txt file disallows crawling of these assets, it directly harms how well our algorithms render and index your content. This can result in suboptimal rankings. Just as some backlinks you earn are more valuable than others, links you create to other sites also differ in value. When linking out to an external site, the choices you make regarding the page from which you link (its page authority, content, search engine accessibility, and so on), the anchor text you use, whether you choose to follow or nofollow the link, and any other meta tags associated with the linking page can have a heavy impact on the value you confer. My favorite tool to spy on my competitors' backlinks is called Monitor Backlinks. It allows you to add your four most important competitors. From then on, you get a weekly report containing all the new links they have earned. Inside the tool, you get more insights about these links and can sort them by their value and other SEO metrics. A useful feature is that all the links my own website already has are highlighted in green, as in the screenshot below. Search engines find and catalog web pages through spidering (also known as webcrawling) software. Spidering software "crawls" through the internet and grabs information from websites which is used to build search engine indexes. Unfortunately, not all search engine spidering software works the same way, so what gives a page a high ranking on one search engine may not necessarily give it a high ranking on another.
Note that rather than waiting for a search engine to discover a newly created page, web designers can submit the page directly to search engines for cataloging. The PageRank theory holds that an imaginary surfer who is randomly clicking on links will eventually stop clicking. The probability, at any step, that the person will continue is a damping factor d. Various studies have tested different damping factors, but it is generally assumed that the damping factor will be set around 0.85.[5] In applications of PageRank to biological data, a Bayesian analysis finds the optimal value of d to be 0.31.[24] Consumers today are driven by the experience. This shift from selling products to selling an experience requires a connection with customers on a deeper level, at every digital touch point. TheeDigital’s internet marketing professionals work to enhance the customer experience, grow your online presence, generate high-quality leads, and solve your business-level challenges through innovative, creative, and tactful internet marketing. No PageRank would ever escape from the loop, and as incoming PageRank continued to flow into the loop, eventually the PageRank in that loop would reach infinity. Infinite PageRank isn’t that helpful 🙂 so Larry and Sergey introduced a decay factor–you could think of it as 10-15% of the PageRank on any given page disappearing before the PageRank flows along the outlinks. In the random surfer model, that decay factor is as if the random surfer got bored and decided to head for a completely different page. You can do some neat things with that reset vector, such as personalization, but that’s outside the scope of our discussion. An Internet marketing campaign is not an isolated, one-off proposal. Any company that plans on using it once is certain to continue to use it. 
An individual who is knowledgeable about all aspects of an Internet marketing campaign and who has strong interpersonal skills is well-suited to maintain an ongoing managerial role on a dedicated marketing team. Mathematical PageRanks for a simple network, expressed as percentages. (Google uses a logarithmic scale.) Page C has a higher PageRank than Page E, even though there are fewer links to C; the one link to C comes from an important page and hence is of high value. If web surfers who start on a random page have an 85% likelihood of choosing a random link from the page they are currently visiting, and a 15% likelihood of jumping to a page chosen at random from the entire web, they will reach Page E 8.1% of the time. (The 15% likelihood of jumping to an arbitrary page corresponds to a damping factor of 85%.) Without damping, all web surfers would eventually end up on Pages A, B, or C, and all other pages would have PageRank zero. In the presence of damping, Page A effectively links to all pages in the web, even though it has no outgoing links of its own. One final note is that if the links are not directly related to the subject, or you have no control over them, such as commentors’ website links, maybe you should consider putting them on another page, which links to your main content. That way you don’t leak page rank, and still gain hits from search results from the content of the comments. I may be missing something but this seems to mean that you can have your cake and eat it, and I don’t even think it is gaming the system or against the spirit of it. You might even gain a small sprinkling of page rank if the comment page accumulates any of it’s own. As of October 2018 almost 4.2 billion people were active internet users and 3.4 billion were social media users (Statista). China, India and the United States rank ahead all other countries in terms of internet users. 
This gives a marketer an unprecedented number of customers to reach with product and service offerings, available 24 hours a day, seven days a week. The interactive nature of the internet facilitates immediate communication between businesses and consumers, allowing businesses to respond quickly to the needs of consumers and changes in the marketplace. I think Google will always be working to discern and deliver “quality, trustworthy” content, and I think analyzing inbound links as endorsements is a solid tool the SE won’t be sunsetting anytime soon. Why would they? If the president of the United States links to your page, that is undoubtedly an endorsement that tells Google you’re a legitimate trusted source. I know that is an extreme example, but I think it illustrates the principles of a linking-as-endorsement model well. 3. General on-site optimization. On-site optimization is a collection of tactics, most of which are simple to implement, geared toward making your website more visible and indexable to search engines. These tactics include things like optimizing your titles and meta descriptions to include some of your target keywords, ensuring your site’s code is clean and minimal, and providing ample, relevant content on every page. I’ve got a huge list of on-site SEO tactics you can check out here. Moreover, the PageRank mechanism is entirely general, so it can be applied to any graph or network in any field. Currently, the PR formula is used in bibliometrics, social and information network analysis, and for link prediction and recommendation. It's even used for system analysis of road networks, as well as in biology, chemistry, neuroscience, and physics. Matt, in almost every example you have given about “employing great content” to receive links naturally, you use blogs as an example. What about people that do not run blog sites (the vast majority of sites!), for example an E-Com site selling stationery?
How would you employ “great content” on a site that essentially sells a boring product? Is it fair that companies that sell uninteresting products or services should be outranked by huge sites like Amazon that have millions to spend on marketing because they can't attract links naturally? We regard a small web consisting of three pages A, B and C, whereby page A links to the pages B and C, page B links to page C and page C links to page A. According to Page and Brin, the damping factor d is usually set to 0.85, but to keep the calculation simple we set it to 0.5. The exact value of the damping factor d admittedly has effects on PageRank, but it does not influence the fundamental principles of PageRank. So, we get the following equations for the PageRank calculation:

PR(A) = 0.5 + 0.5 PR(C)
PR(B) = 0.5 + 0.5 (PR(A) / 2)
PR(C) = 0.5 + 0.5 (PR(A) / 2 + PR(B))

These solve to PR(A) = 14/13 ≈ 1.08, PR(B) = 10/13 ≈ 0.77 and PR(C) = 15/13 ≈ 1.15, so the PageRanks sum to 3, the number of pages. Email marketing - Email marketing, in comparison to other forms of digital marketing, is considered cheap; it is also a way to rapidly communicate a message such as a value proposition to existing or potential customers. Yet this channel of communication may be perceived by recipients to be bothersome and irritating, especially to new or potential customers; therefore the success of email marketing is reliant on the language and visual appeal applied. In terms of visual appeal, there are indications that using graphics/visuals that are relevant to the message being sent, yet fewer visual graphics in initial emails, is more effective, in turn creating a relatively personal feel to the email. In terms of language, style is the main factor in determining how captivating the email is. Using a casual tone invokes a warmer, gentler and more inviting feel to the email in comparison to a formal style. For combinations, it is suggested that, to maximize effectiveness, one uses no graphics/visuals alongside casual language.
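The three-page example (A links to B and C, B links to C, C links to A, with d = 0.5) can be verified by straightforward power iteration of the PageRank formula; a minimal sketch (the function and variable names are my own):

```python
def pagerank(links, d=0.5, iterations=200):
    """Power iteration of the classic (non-normalised) PageRank formula
    PR(p) = (1 - d) + d * sum(PR(q) / outdegree(q) for q linking to p).
    `links` maps each page to the set of pages it links to."""
    pr = {page: 1.0 for page in links}
    for _ in range(iterations):
        pr = {
            page: (1 - d) + d * sum(
                pr[q] / len(links[q]) for q in links if page in links[q]
            )
            for page in links
        }
    return pr

web = {"A": {"B", "C"}, "B": {"C"}, "C": {"A"}}
ranks = pagerank(web)  # converges to PR(A)=14/13, PR(B)=10/13, PR(C)=15/13
```

Note that the three values sum to 3, the number of pages, consistent with the earlier remark that the PageRanks converge to the total page count and average out to 1.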
In contrast, using no visual appeal and a formal language style is seen as the least effective method.[48] There's a lot to learn when it comes to the internet marketing field in general, and the digital ether of the web is a crowded space filled with one know-it-all after another that wants to sell you the dream. However, what many people fail to do at the start, and something that Sharpe learned along the way, is to actually understand what's going on out there in the digital world and how businesses and e-commerce work in general, before diving in headfirst. While most search engine companies try to keep their processes a secret, their criteria for high spots on SERPs isn't a complete mystery. Search engines are successful only if they provide a user links to the best Web sites related to the user's search terms. If your site is the best skydiving resource on the Web, it benefits search engines to list the site high up on their SERPs. You just have to find a way to show search engines that your site belongs at the top of the heap. That's where search engine optimization (SEO) comes in -- it's a collection of techniques a webmaster can use to improve his or her site's SERP position. A breadcrumb is a row of internal links at the top or bottom of the page that allows visitors to quickly navigate back to a previous section or the root page. Many breadcrumbs have the most general page (usually the root page) as the first, leftmost link and list the more specific sections out to the right. We recommend using breadcrumb structured data markup when showing breadcrumbs. PageRank as a visible score has been dying a slow death since around 2010, I’d say. Pulling it from the Google Toolbar makes it official, puts the final nail in the visible PageRank score coffin. The few actually viewing it within Internet Explorer, itself a deprecated browser, aren’t many.
The real impact of dropping it from the toolbar is that third parties can no longer find ways to pull those scores automatically. A backlink is simply a link from another domain pointing back to your site. Simple, right? Well, yes and no. Not all backlinks are created equal, and there are a few rules you need to observe in order for them to benefit your site’s ranking. For example, your link needs to be a clickable hyperlink with anchor text; for example, www. conductor .com won’t help us in link building, but www.conductor.com will. Make it as easy as possible for users to go from general content to the more specific content they want on your site. Add navigation pages when it makes sense and effectively work these into your internal link structure. Make sure all of the pages on your site are reachable through links, and that they don't require an internal "search" functionality to be found. Link to related pages, where appropriate, to allow users to discover similar content. I just wanted to thank you for the awesome email of information. It was so awesome to see the results I have gotten and the results that your company has provided for other companies. Truly remarkable. I feel so blessed to be one of your clients. I do not feel worthy but do feel very blessed and appreciative to have been a client for over 5 years now. My business would not be where it is today without you, your company and team. I sure love how you are dedicated to quality. I can not wait to see what the next 5 years bring with 10 years of Internet Marketing Ninjas as my secret weapon. John B. SEO often involves the concerted effort of multiple departments within an organization, including the design, marketing, and content production teams. While some SEO work entails business analysis (e.g., comparing one’s content with competitors’), a sizeable part depends on the ranking algorithms of various search engines, which may change with time.
Nevertheless, a rule of thumb is that websites and webpages with higher-quality content, more external referral links, and more user engagement will rank higher on a SERP.

Excellent post! I'm reasonably savvy up to a certain point and have managed to get some of my health content organically ranking higher than WebMD. It's taken a long time building strong backlinks from very powerful sites (HuffingtonPost being one of them), but I am going to take some time, plow through a few beers, and then get stuck into implementing some of these suggestions. Keep up the great work amigo. Cheers, Bill

Getting unique and authoritative links is crucial for ranking higher in the SERPs and improving your SEO. Google's algorithm for evaluating links has evolved in recent years, making it more challenging to get high-quality backlinks. External links still matter and aren't obsolete, so start working on strategies to get valuable backlinks to improve your search visibility.

Organic SEO's flip side offers up a paid method for marketing on search engines like Google. SEM provides an avenue for displaying ads through networks such as Google's Adwords and other paid search platforms that exist across the web, including social media sites like Facebook and Instagram and even video sites like YouTube, the world's second-largest search engine.

I first discovered Sharpe years ago online. His story was one of the most sincere and intriguing tales that any one individual could convey. It was real. It was heartfelt. It was passionate. And it was a story of rock-bottom failure. It encompassed a journey that mentally, emotionally and spiritually crippled him in the early years of his life. As someone who left home at the age of 14, had a child at 16, became addicted to heroin at 20 and got clean four long years later, he definitely had the cards stacked against him.
You'll want to use email, blogging, and social media tactics to increase brand awareness, cultivate a strong online community, and retain customer loyalty. Consider sending personalized emails to past customers to impress or inspire them -- for instance, you might send discounts based off what they've previously purchased, wish them a happy birthday, or remind them of upcoming events.

Mega-sites, like http://news.bbc.co.uk, have tens or hundreds of editors writing new content -- i.e. new pages -- all day long! Each one of those pages has rich, worthwhile content of its own and a link back to its parent or the home page! That's why the home page Toolbar PR of these sites is 9/10 and the rest of us just get pushed lower and lower by comparison…

People think about PageRank in lots of different ways. Some have compared PageRank to a "random surfer" model, in which PageRank is the probability that a random surfer clicking on links lands on a page. Others think of the web as a link matrix in which the value at position (i,j) indicates the presence of links from page i to page j. In that case, PageRank corresponds to the principal eigenvector of that normalized link matrix.

Keyword analysis. From nomination, further identify a targeted list of keywords and phrases. Review competitive lists and other pertinent industry sources. Use your preliminary list to determine an indicative number of recent search engine queries and how many websites are competing for each keyword. Prioritize keywords and phrases, plurals, singulars and misspellings. (If search users commonly misspell a keyword, you should identify and use it.) Please note that Google will try to correct the term when searching, so use this with care.

Try using Dribble to find designers with good portfolios. Contact them directly by upgrading your account to PRO status, for just $20 a year. Then simply use the search filter and type "infographics."
After finding someone you like, click on "hire me" and send a message detailing your needs and requesting a price. Fiver is another place to find great designers willing to create inexpensive infographics.

Search engines use complex mathematical algorithms to guess which websites a user seeks. In this diagram, each bubble represents a website, and programs sometimes called spiders examine which sites link to which other sites, with arrows representing these links. Websites getting more inbound links, or stronger links, are presumed to be more important and what the user is searching for. In this example, since website B is the recipient of numerous inbound links, it ranks more highly in a web search. And the links "carry through", such that website C, even though it only has one inbound link, has an inbound link from a highly popular site (B), while site E does not.

In today's world, QUALITY is more important than quantity. Google penalties have caused many website owners to not only stop link building, but start link pruning instead. Poor-quality links (i.e., links from spammy or off-topic sites) are like poison and can kill your search engine rankings. Only links from quality sites, and pages that are relevant to your website, will appear natural and not be subject to penalty. So never try to buy or solicit links -- earn them naturally or not at all.

The flood of iframe and off-page hacks and plugins for WordPress and various other platforms might not come pouring in, but I'm willing to bet the few that do come in will begin to get prominence and popularity. It seemed such an easy way to keep control over PR flow offsite to websites you may not be 'voting for', and after all, isn't that what a link has always represented? It would seem Google should catch up with the times.
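The "carry through" effect described above can be demonstrated with a few lines of power iteration. The six-page graph below is my own invention, loosely following the diagram's description (B has several inbound links; C's only inbound link comes from the popular B; E's only inbound link comes from a minor page):

```python
# Toy PageRank by power iteration. Graph and numbers are illustrative only.
def pagerank(links, damping=0.85, iters=100):
    pages = sorted(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:                      # each outlink passes on an
                new[q] += damping * rank[p] / len(outs)  # equal share of rank
        rank = new
    return rank

links = {
    "A": ["B", "E"],   # minor page linking to B and E
    "B": ["C"],        # popular page: its single outlink boosts C
    "C": ["A"],
    "D": ["B"],
    "E": ["A"],
    "F": ["B"],
}
rank = pagerank(links)
assert rank["C"] > rank["E"]   # C inherits authority from B
assert rank["B"] > rank["D"]   # many inbound links beat few
```

The two assertions encode exactly the two claims in the paragraph: more (or stronger) inbound links mean a higher score, and authority carries through a single link from a popular page.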
Numerous academic papers concerning PageRank have been published since Page and Brin's original paper.[5] In practice, the PageRank concept may be vulnerable to manipulation. Research has been conducted into identifying falsely influenced PageRank rankings. The goal is to find an effective means of ignoring links from documents with falsely influenced PageRank.[6]

The Open Directory Project (ODP) is a Web directory maintained by a large staff of volunteers. Each volunteer oversees a category, and together volunteers list and categorize Web sites into a huge, comprehensive directory. Because a real person evaluates and categorizes each page within the directory, search engines like Google use the ODP as a database for search results. Getting a site listed on the ODP often means it will show up on Google.

Honestly, I've read your blog for about 4 or 5 years now, and the more I read the less I cared about creating new content online, because it feels like even following the "Google Rules" still isn't the way to go -- because unlike standards, there is no standard. You guys can change your mind whenever you feel like it, and I can become completely screwed. So screw it. I'm done trying to get Google to find my site. With Twitter and other outlets, and 60% of all Google usage not even being about finding sites but spell check, I don't care anymore.

Private corporations use Internet marketing techniques to reach new customers by providing easy-to-access information about their products. The most important element is a website that informs the audience about the company and its products, but many corporations also integrate interactive elements like social networking sites and email newsletters.

Most schools / universities have just an [email protected]… or [email protected]… email address, which goes to the reception.
I don't really know who to address this email to, as I believe a lot of the time the admin person receiving it ignores and deletes it without passing it on to someone relevant, e.g. the school's or university's communications manager. Hope you can help me on this one! Thanks so much in advance!

Brian, this is the web page that everybody over the entire Internet was searching for. This page answers the million dollar question! I was particularly interested in the food blogs' untapped market -- who doesn't love food? I have recently been sent backwards in the SERPs, and this page will help immensely. I will subscribe to comments and will be back again for more reference.

In 2005, in a pilot study in Pakistan, Structural Deep Democracy (SD2)[61][62] was used for leadership selection in a sustainable agriculture group called Contact Youth. SD2 uses PageRank for the processing of the transitive proxy votes, with the additional constraints of mandating at least two initial proxies per voter, and that all voters are proxy candidates. More complex variants can be built on top of SD2, such as adding specialist proxies and direct votes for specific issues, but SD2, as the underlying umbrella system, mandates that generalist proxies should always be used.

The name "PageRank" plays off of the name of developer Larry Page, as well as of the concept of a web page.[15] The word is a trademark of Google, and the PageRank process has been patented (U.S. Patent 6,285,999). However, the patent is assigned to Stanford University and not to Google. Google has exclusive license rights on the patent from Stanford University. The university received 1.8 million shares of Google in exchange for use of the patent; it sold the shares in 2005 for $336 million.[16][17]

If the algorithm really works as Matt suggests, no one should use nofollow links internally. I'll use the example that Matt gave.
Suppose you have a home page with ten PR “points.” You have links to five “searchable” pages that people would like to find (and you’d like to get found!), and links to five dull pages with disclaimers, warranty info, log-in information, etc. But, typically, all of the pages will have links in headers and footers back to the home page and other “searchable” pages. So, by using “nofollow” you lose some of the reflected PR points that you’d get if you didn’t use “nofollow.” I understand that there’s a decay factor, but it still seems that you could be leaking points internally by using “nofollow.”
# Why can't a confidence level be 100%?

I had wondered that in the past, but then you have to think about what the normal distribution is. We know that it covers values from $-\infty$ to $\infty$. This means that no matter how wide your interval is constructed, you will never include all the possible values. If you look at most tables, I think they report something like .999...

Simply put, you can never be 100% certain you captured the true population value, because $-\infty \le n \le \infty$ and any bounded interval will clearly not cover all the possible values. I would argue that being 99% confident seems pretty good to me.
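The point about the unbounded normal distribution can be made concrete: the two-sided critical value grows without limit as the confidence level approaches 1, so a "100% interval" would have to be $(-\infty, \infty)$. A quick sketch using Python's standard-library `NormalDist` (the helper name is mine):

```python
from statistics import NormalDist

def z_critical(confidence: float) -> float:
    """Two-sided critical value z with P(-z <= Z <= z) = confidence."""
    return NormalDist().inv_cdf((1 + confidence) / 2)

# The half-width of the interval diverges as confidence -> 1.
for c in (0.90, 0.99, 0.999999):
    print(c, round(z_critical(c), 3))
```

For 99% the familiar value is about 2.576; pushing the confidence to 0.999999 already needs a much wider interval, and no finite width ever reaches 100%.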
I'm using SageMath 8.2 on Windows 10 (native) with the Jupyter Notebook. When trying to download a Jupyter notebook to PDF, I get the following message:

    500 : Internal Server Error
    nbconvert failed: PDF creating failed, captured latex output:
    This is XeTeX, Version 3.14159265-2.6-0.99998 (MiKTeX 2.9.6500 64-bit)
    entering extended mode
    ! I can't find file `/home/danie_000/SageManifolds/test/notebook.tex'.
    <*> ...e/danie_000/SageManifolds/test/notebook.tex
    Please type another input file name:
    ! Emergency stop.
    <*> ...e/danie_000/SageManifolds/test/notebook.tex
    No pages of output.
    Transcript written on texput.log.

It is looking for some file in /home/danie_000, but I don't use that default directory to place my Jupyter files. I've changed my fstab like this:

    D:\Users\danie_000\Sage /home/danie_000 ntfs binary,posix=1,acl 0 0

So I use D:\Users\danie_000\Sage instead, a different directory and drive. If SageMath allows changing the default root directory via the fstab file, it should also change any command that needs that information! Daniel

This looks like an issue similar to many other issues (such as https://github.com/sagemath/sage-wind... and some others), where what's happening is that Jupyter is searching for and using an external LaTeX distribution in order to do the TeX-to-PDF processing, and it is successfully finding your Windows-native MiKTeX distribution. However, since Sage for Windows runs under Cygwin, it is passing UNIX-style file paths to the Windows executables provided by MiKTeX, and they do not know how to interpret them. I will see if I can get an upstream fix for this in nbconvert, or if not there, then at least a patch in Sage. In the meantime you can export to .tex and then pass it to your favorite LaTeX renderer manually.

I suppose that "I'm using SageMath 8.2 on a Windows 10 Native with Jupyter Notebook" means that you have used Erik Bray's Windows/Cygwin installer.
Well, Cygwin is pretty peculiar about its file access. If you want to save somewhere on a network drive, you must manage to give Cygwin access to it. Not really obvious. ISTR that I managed to create some sort of symbolic link (Windows? Cygwin? Can't remember at the moment).

But there is more: the Jupyter notebook can access only the directory it's started from (Edit: and its subdirectories); in other words, you can't climb up the tree. In your case, that means that you have to start "from" your network drive (bloody unlikely IMHO) or somehow create a symlink from your directory to your network drive. I haven't the slightest idea about whether this is possible in Cygwin or not.

Sorry to be unable to be more precise: at the moment, I have neither Windows hardware connected to a Windows server nor a functional Windows VM...
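The path mismatch the accepted answer describes can be illustrated with a toy translator. This is roughly what Cygwin's `cygpath -w` utility does; the function and the mount table below are my own illustration, mirroring the fstab line from the question:

```python
def cygwin_to_windows(path, mounts=None):
    """Translate a Cygwin POSIX path into a Windows path (illustrative only).

    `mounts` maps Cygwin mount points to Windows directories; the default
    entry mirrors the fstab line from the question above.
    """
    if mounts is None:
        mounts = {"/home/danie_000": r"D:\Users\danie_000\Sage"}
    for posix_prefix, win_prefix in mounts.items():
        if path.startswith(posix_prefix):
            rest = path[len(posix_prefix):].replace("/", "\\")
            return win_prefix + rest
    if path.startswith("/cygdrive/"):          # default Cygwin drive prefix
        drive, _, rest = path[len("/cygdrive/"):].partition("/")
        return drive.upper() + ":\\" + rest.replace("/", "\\")
    return path  # no mapping known: the raw POSIX path MiKTeX ends up seeing

print(cygwin_to_windows("/home/danie_000/SageManifolds/test/notebook.tex"))
# -> D:\Users\danie_000\Sage\SageManifolds\test\notebook.tex
```

A native Windows xetex.exe only understands the translated form on the last line; handed the raw `/home/danie_000/...` path, it fails exactly as in the error transcript.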
# Math Help - Differential Equations General solution help.

1. ## Differential Equations General solution help.

how do i get the general solution of: dy/dx = y/(sin(y) - x)? thanks

2. Hmm. We have

$\dfrac{dy}{dx}=\dfrac{y}{\sin(y)-x}$

$\dfrac{dx}{dy}=\dfrac{\sin(y)-x}{y}$

$\dfrac{dx}{dy}+\dfrac{x}{y}=\dfrac{\sin(y)}{y}.$

This is first-order linear in x(y).

3. so the IF is y? so you get y*(dx/dy) + x = y*sin(y), and from that you get: x = (1/y)*sin(y) - cos(y)

4. Correct IF. However, the RHS is just sin(y), right? That should simplify things a bit for you.

5. ah yes i multiplied the IF into the RHS! my bad! thanks very much!

6. You're welcome. Have a good one!
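Finishing the thread's calculation: with integrating factor y, the equation becomes d(xy)/dy = sin(y), so xy = C - cos(y), i.e. x = (C - cos(y))/y. A plain-Python numerical check (my own sketch) that this general solution satisfies the ODE for any constant C:

```python
import math

def x_of_y(y: float, C: float = 1.0) -> float:
    """General solution x(y) = (C - cos y) / y of dx/dy + x/y = sin(y)/y."""
    return (C - math.cos(y)) / y

def residual(y: float, C: float = 1.0, h: float = 1e-6) -> float:
    """Plug x(y) back into the ODE; should vanish for any y != 0."""
    dxdy = (x_of_y(y + h, C) - x_of_y(y - h, C)) / (2 * h)  # central difference
    return dxdy + x_of_y(y, C) / y - math.sin(y) / y

for y in (0.5, 1.0, 2.0, 3.0):
    assert abs(residual(y)) < 1e-6
```

The residual stays at finite-difference noise level for every test point, confirming the corrected answer rather than the one in post 3.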
# Tag Info

## New answers tagged binary

0

Martin, I believe that you need to use a floating-point convention to represent the decimal number. You can refer to the article https://en.wikipedia.org/wiki/IEEE_floating_point which shows a number of formats. I don't understand the significance of representing the value in octal versus binary, because the base you choose to represent the value is ...

0

The range is from $-2^{15}$ to $2^{15}-1$.

0

Refer to the 4-bit example in the above comment.

Standard binary to Gray code: $a\mid b\mid c\mid d\qquad\to\qquad p\mid q\mid r\mid s$ where $p=(a)$, $q=(a\not\equiv b)$, $r=(b\not\equiv c)$, $s=(c\not\equiv d)$

Gray code to standard binary: $p\mid q\mid r\mid s\qquad\to\qquad a\mid b\mid c\mid d$ where $a=(p)$, $b=(p\not\equiv q)$ ...

0

When using two's complement, the range of an 8-bit integer is $-128 \le n \le 127$. When you have the number 0xE5, once it is in binary form, it is a negative number. The two's complement of this number is a positive number; add a negative sign to it, and that is the (negative) decimal value of 0xE5 in a system using two's complement. Once you flip bits, ...

0

When you subtract two signed numbers, the Carry flag is irrelevant; the Overflow flag is all that counts. The Carry flag is for unsigned integer operations. So there is only ever one bit to worry about.

1

You can find many explanations of the IEEE-754 format on the Web. In short, each 32-bit single-precision floating-point number consists of three parts (we assume that bit numbering begins from zero): sign, bit 31; exponent, bits 30-23 (eight bits in total); significand (or "mantissa"), bits 22-0 (twenty-three bits in total). However, in your case the ...

0

Overflow and carry out are philosophically the same thing. Both indicate that the answer does not fit in the space available. The difference is that carry out applies when you have somewhere else to put it, while overflow is when you do not.
As an example, imagine a four-bit computer using unsigned binary for addition. If you try to add $1010_2+111_2$ ...

1

As BrianO explains in his answer, any rational number that can be written with a terminating binary expansion has two distinct binary representations: one in which the binary representation terminates in a $1$, and one in which the binary representation terminates in $0\overline{1}$. So the question in the OP reduces to: which rational numbers in $[0,1)$ ...

1

Partial answer: Any nonzero rational in $[0,1)$ with a terminating representation has two binary expansions. Suppose $x\in (0,1)$ is rational, $x\ne 0$, and $$x = 0.d_1 \dotsm d_n$$ for binary digits $d_i$. As $x\ne 0$, we have $d_n = 1$: $$x = 0.d_1 \dotsm d_{n-1} 1$$ But now it's easy to see and show that $$x = 0.d_1 \dotsm d_{n-1} 0 \overline{1}$$ ...

0

$$0.\overline{0011}=\sum_{k=0}^\infty \left(\frac1{2^3}+\frac1{2^4}\right)\frac1{2^{4k}}=\sum_{k=0}^\infty \frac3{16}\cdot\frac1{16^{k}}=\frac3{16}\cdot\frac1{1-1/16}=\frac15.$$

2

You can use the same technique as with repeating decimals: $$\begin{array}{rcrl}x &=& 0.\overline{0011}\\ 10000_2\, x &=& 11.\overline{0011}\\ 1111_2\, x &=& 11\\ x&=&\dfrac{11_2}{1111_2}&=\dfrac{3}{15}=\dfrac15\end{array}$$

0

When you represent a number in decimal scientific notation, the base of the exponent is 10, so $1\mathrm{E}35=1\cdot 10^{35}$. In binary, the base is 2, so you are trying to solve $1\cdot 10^{35}=m\cdot 2^e$, where $1 \lt m \lt 10_2$ is the mantissa and $e$ is an integer exponent. To find $e$ we can take logs: $$1 \cdot 10^{35} = m\cdot 2^e\\ 35 \log_2(10)=e+ \log_2 \dots$$

0

The probability of a binary word of length $N$ occurring at position $i$ within another binary word of length $M\ge N$, where $1\le i\le M-N+1$, is $$P(N_i)=\frac{1}{2^N}$$ The probability of not finding word $N$ at position $i$ within word $M$ is $$P(\lnot N_i)=1-\frac{1}{2^N}$$ The probability of not finding word $N$ anywhere within word $M$ is $$P(\lnot \dots$$
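The geometric-series answer for $0.\overline{0011}_2$ can be checked exactly with rational arithmetic; a small sketch of mine using `fractions.Fraction`:

```python
from fractions import Fraction

# Each 4-bit block 0011 contributes (1/2^3 + 1/2^4) * (1/16)^k,
# exactly the series in the answer above.
def repeating_0011(blocks: int) -> Fraction:
    return sum((Fraction(1, 8) + Fraction(1, 16)) * Fraction(1, 16) ** k
               for k in range(blocks))

# The geometric series sums to (3/16) / (1 - 1/16) = 1/5.
limit = Fraction(3, 16) / (1 - Fraction(1, 16))
assert limit == Fraction(1, 5)

# Partial sums approach 1/5: the error shrinks by a factor of 16 per block.
assert repeating_0011(1) == Fraction(3, 16)
assert abs(float(repeating_0011(10)) - 0.2) < 1e-10
```

Exact fractions avoid the floating-point round-off that would otherwise blur the comparison with 1/5.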
0

The fact that $$2^{10} = 1024 \approx 1000 = 10^3$$ tells you that every ten binary digits need just over three decimal digits. Your question asks about exactly how many more than three (3.32...). The other answers (using logarithms, all good) identify the source of that number.

1

Taken from comments: You have correctly proved that $\sum_{i=0}^{n-1} 2^i = 2^n - 1$. To complete the proof of the initial claim, you should elaborate on why $\sum_{i=0}^{n-1} 2^i$ is the greatest number you can make with $n$ bits.

2

You can use strong induction. $1=2^0 3^0$ is the base case. Now assume all numbers up through $m$ can be represented. If $m+1$ is even, take the representation of $\frac 12(m+1)$ and increase all the $x$'s by 1. If $m+1$ is an odd multiple of 3, take the representation of $\frac 13(m+1)$ and increase all the $y$'s by 1. If $m+1$ is coprime to 6, ...

1

No, each nonnegative integer has one and only one binary representation (that is, if we don't care about representations that only differ in how many leading zeroes they have). Suppose you have two different bit strings and we want to prove that they represent different integers. Let's look at the leftmost position where the two bit strings differ; there ...

1

No. If we use $n$ bits to represent unsigned integers in the range $0,1,\ldots, 2^n-1$, then each bit pattern corresponds to exactly one of these integers and vice versa. Assume that some integer $k\ge 0$ allowed two distinct patterns of $n$ bits, where $2^n>k$. Among those integers we let $k$ be the minimal one. And among all matching numbers of bits, ...

0

It's just the way the positional system works. For example, consider the number 101 (no base mentioned). The positional system works so that each digit in the number has a different "weight". If it's base ten, the first one has weight one hundred, the zero has weight ten and the last one has weight one. The number is then the sum of the digits multiplied with ...
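The 3.32... in the first answer is $\log_2(10)$, the exact number of bits per decimal digit; a quick check of my own:

```python
import math

# Ten binary digits hold just over three decimal digits, because
# 2^10 = 1024 is slightly more than 10^3. The exact ratio is log2(10).
bits_per_decimal_digit = math.log2(10)

assert 2 ** 10 > 10 ** 3
assert 3 < bits_per_decimal_digit < 3.33   # ~3.3219

# Decimal digit count of 2^n grows like n / log2(10):
n = 100
assert len(str(2 ** n)) == math.floor(n / bits_per_decimal_digit) + 1
```

The last assertion works for every power of two, since $2^n$ is never an exact power of ten.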
0

In the number system with base $n$, the number represented by the digits $$a_ra_{r-1}\dots a_1$$ is $$a_rn^{r-1}+a_{r-1}n^{r-2}+\dots + a_2n+a_1$$ If we set $a_k=0$ except for $a_2=1$, this sum is equal to $n$, so that $10=n$ in base $n$.

1

I'm not sure what you mean. In "why 10 in any base number system write as 10", what is the supposed base of each '10'...? For example, in base 2, 'two' is written as 10, but it is not decimal 'ten'; it's two. We use a subscript to denote a base (with a common convention that the base in the subscript is in decimal), so: two in binary is $2 = 10_2$, ten in decimal ...

1

$$10000000110.011_2 = 1\cdot2^{10} + 0\cdot2^9 + \dots + 1\cdot2^2 + 1\cdot2^1 + 0\cdot2^0 + 0\cdot2^{-1} + 1\cdot2^{-2} + 1\cdot2^{-3}$$

2

Imagine you write the $n$ zeros and the $n$ ones along a line. The possible configurations can be created by permuting them in all possible ways. There are $(2n)!$ permutations of $2n$ digits. However, permutations that only exchange ones among themselves do not produce new configurations. You must discard them. There are $n!$ permutations among the $n$ ...

0

This is the number of trailing zeroes at the end of $5^p/2$ (integer division). In mathematical terms, it is the multiplicity of the factor 2 in the prime decomposition of this number. In programming, you will shift right until you get an odd number, and count the shifts.

2

As described in the comments: the answer, let's call it $f(n)$, is one less than the greatest power of 2 which divides $5^n-1$. Thus $$f(n)=v_2(5^n-1)-1$$ where, as usual, $v_2(k)$ denotes the greatest power of $2$ dividing $k$. To compute $v_2(5^n-1)$... first note that this is $2$ if $n$ is odd: indeed $5^n-1=(5-1)(5^{n-1}+\dots+1)$ regardless ...
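The $v_2(5^n-1)$ claim is easy to test numerically; the trailing-zero count below uses the bit-isolation trick rather than a shift loop, and the function names are mine:

```python
def v2(k: int) -> int:
    """Multiplicity of the factor 2 in k (count of trailing zero bits)."""
    return (k & -k).bit_length() - 1   # k & -k isolates the lowest set bit

# f(n) = v2(5^n - 1) - 1, as defined in the answer above.
def f(n: int) -> int:
    return v2(5 ** n - 1) - 1

# For odd n, 5^n - 1 = (5 - 1)(5^{n-1} + ... + 1) and the second factor
# is a sum of an odd number of odd terms, hence odd: v2 is exactly 2.
for n in range(1, 20, 2):
    assert v2(5 ** n - 1) == 2 and f(n) == 1
```

For even n the valuation is larger, e.g. $v_2(5^2-1)=v_2(24)=3$, which is consistent with the "this is 2 if n is odd" restriction in the answer.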
0

There is nothing wrong: in MATLAB, as in any programming language, $\pi$ is represented by a double-precision floating-point number, i.e. a rational number of the form $N 2^{-k}$ where $N,k$ are integers.

1

I always manage to confuse myself with this process, since it is not done manually too often, so refer to this handy algorithm-like approach. In both cases, the number we are subtracting is larger in magnitude.

$1$'s complement: Determine the $1$'s complement of the larger number: $00101$. Add the $1$'s complement to the smaller number: $01001 + 00101 = ...$

0

Binary format works: 2^^01001 - 2^^11010 gives -17.
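The Gray-code formulas quoted earlier in this list ($p=a$, $q=a\oplus b$, $r=b\oplus c$, $s=c\oplus d$) are the 4-bit case of the general identity $g = n \oplus (n \gg 1)$. A short sketch (function names are mine):

```python
def binary_to_gray(n: int) -> int:
    """Each Gray bit is the XOR of adjacent binary bits: g_i = b_i ^ b_{i+1}."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert by XOR-folding: each binary bit is the XOR of all higher Gray bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# 4-bit round trip; consecutive Gray codes differ in exactly one bit.
codes = [binary_to_gray(i) for i in range(16)]
assert all(gray_to_binary(c) == i for i, c in enumerate(codes))
assert all(bin(codes[i] ^ codes[i + 1]).count("1") == 1 for i in range(15))
```

The one-bit-difference property is what makes Gray codes useful for rotary encoders and Karnaugh-map orderings.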
# Solve the problem:

1) Suppose there are 6 roads connecting town A to town B and 4 roads connecting town B to town C. In how many ways can a person travel from A to C via B?
A) 16 ways B) 10 ways C) 36 ways D) 24 ways

Find the indicated probability. Round your answer to 6 decimal places when necessary.

2) A bag contains 6 red marbles, 3 blue marbles, and 5 green marbles. If a marble is randomly selected from the bag, what is the probability that it is blue?
A) 3 B) 5 D)

3) Two 6-sided dice are rolled. What is the probability that the sum of the two numbers on the dice will be 5?
A) 9 8 5 D) 4

Solve the problem.

4) In a certain town, 5% of people commute to work by bicycle. If a person is selected randomly from the town, what are the odds against selecting someone who commutes by bicycle?
A) 19 : 1 B) 1 : 20 C) 19 : 20 D) 1 : 19

Determine whether the events A and B are independent:

5) 12 jurors are selected from a pool of 20. Event A: The first person selected is a woman. Event B: The second person selected is a woman.
A) Yes B) No

Find the indicated probability. Round your answer to 6 decimal places when necessary.

6) An IRS auditor randomly selects 3 tax returns from 49 returns of which contain errors. What is the probability that she selects none of those containing errors?
A) 0.6758 B) 0.0011 C) 0.0018 D) 0.6698

7) A family has five children. The probability of having a girl is 0.5. What is the probability of having 3 girls followed by 2 boys?
A) 16 32 D) 720

Evaluate the factorial expression: 10!/(8! 2!)
A) 1 B) 10 D) 45
{}
# Static equilibrium problems

1. Oct 19, 2006

### laura001

hey every1... i'm stuck on this problem and unless i get it and a few others right im gonna fail my engineering statics course :( im desperate! I study aerospace engineering but because i've been ill i missed a few classes and i really dont have time to learn the theory to answer these... thank you for your time. Here's the diagram for the 1st Q: http://img383.imageshack.us/my.php?image=q1cm6.jpg The Q that goes with it is: Determine the bending moment at a distance x from the roller support on the left hand side, using the principles of virtual work. The values are: F = 2kN, d = 2.200m and x = 0.600m

2. Oct 19, 2006

### laura001

2nd Question: http://img407.imageshack.us/my.php?image=q2un0.jpg Each of the two uniform hinged bars has a mass m and length L. Both bars are connected by hinge S and supported as shown. The structure is loaded by a mass of 4m, applied at the hinge S. Gravitational acceleration is g = 9.81 m/s^2. m = 15kg, L = 5.500m, angle theta = 130 degrees. Calculate the torque Ma (in Nm) required for equilibrium

3. Oct 19, 2006

Working on it.

4. Oct 19, 2006

### laura001

5. Oct 19, 2006

http://usera.imagecave.com/polkijuhzu322/system1.bmp.jpg Ok, as said, the system is replaced with a mechanism, where a hinge is put in the place where you have to find the moment, and a couple of moment M is added at that same place. Now, you have to construct an initial relative rotation between the two disks connected by the hinge, so do it as is done in the displacement sketch. Now all you have to do is apply the principle of virtual work to get the moment M: $$F\cdot d_{F}+M(A_{1}+A_{2}) = 0$$, where A1 and A2 are the angles of rotation of the two disks.

Last edited: Oct 19, 2006

6. Oct 19, 2006

Regarding the second assignment, replace the system with a system where the force G = mg acts in the center of each bar, and where G' = 4mg acts on the hinge.
Try this: $$\sum M_{(A)}=0 \Rightarrow R_{B} = \cdots ; \sum F_{yi}=0 \Rightarrow R_{A} = \cdots ; \sum M_{(B)}=0 \Rightarrow M_{A} = \cdots$$ I hope this works.

7. Oct 19, 2006

### laura001

thanks for the effort, but i dont fully 'get it' :( so what i have so far is: F . df + M(A1 + A2) = 0 which i think is 2 X df + M(40.36 + 8.53) = 0 ok and using trigonometry i got that df is 0.606711 m, so i get that M = 0.024819 kNm... is that right?

Last edited: Oct 19, 2006

8. Oct 19, 2006

M should equal -0.54 kNm, I checked on it. You obviously didn't do the trigonometry right. $$\frac{0.6}{4}=\frac{d_{F}}{4-2.2}$$, $$A_{1} = \tan A_{1} = \frac{3.4}{4}$$, and $$A_{2} = \tan A_{2} = \frac{0.6}{4}$$. I forgot to point this out - A1 and A2 are differential rotations, so in the theory of small displacements you can use the identity $$\alpha \approx \tan(\alpha)$$.

9. Oct 19, 2006

### laura001

double post

Last edited: Oct 19, 2006

10. Oct 19, 2006

### laura001

yea i made a mistake with the trigonometry, am gonna try the 2nd Q now but i dont have much confidence that i'll get the right answer. pleaseee stick around to correct me im certain i'll make a mistake, thx again. ok so i drew a free body diagram of the system, with mg acting downwards on the centre of each member, and 4mg acting downwards on hinge A. Then i calculated the vertical reaction force at B from moment equilibrium about point A, and got that Bv = 73.582kN then summing forces in the y direction gives that Av = 809.318kN and summing forces in the x direction gives that Ax = 0. But now im not sure what to do! lol.... i know all the forces but am not sure how to calculate the torque at Ma... i mean, i think i can do it without using virtual work... by summing all the moments about point A, and then the reaction moment would be negative of that right.... but have no idea how to do it with virtual work, the idea of displacements confuses me. do u have maybe msn or yahoo radou?

Last edited: Oct 19, 2006

11.
Oct 19, 2006

P.S. If you're certain you'll make a mistake, then you will make a mistake. Conclusion: don't be certain you'll make a mistake.

12. Oct 19, 2006

### laura001

is there anybody, out there? (pink floyd)

13. Oct 19, 2006

Unfortunately, no msn nor yahoo. Btw, it doesn't say anywhere that you have to use virtual work in assignment 2. Your way of thinking about 2 seems correct.

14. Oct 19, 2006

### laura001

hey are u still out there radou? could u possibly help me on another couple questions? :!!)

15. Oct 19, 2006

As long as I'm online, I'll help.

16. Oct 19, 2006

### laura001

thankyou! btw, i know this will sound as rude as some1 can sound... but i have 8 more Q's and if i dont get them all right i really am gonna fail this course :( could i maybe just have the answers? believe me when i say that, i will learn the theory behind all of this stuff but i reallly am in need of nothing less than a miracle just now...

17. Oct 19, 2006

Depends on how big the questions are. Btw, answers won't help you if you don't understand anything. But nevermind, let's give it a try.

18. Oct 19, 2006

### laura001

ok this is actually Q11 http://img120.imageshack.us/my.php?image=qmc9.jpg a = 0.200m, b = 2m, theta = 35 degrees. Determine the torque M (in kNm) on the activating lever of the dump truck necessary to balance the load at the given dump angle theta, g = 9.81 m/s^2.

Last edited: Oct 19, 2006

19. Oct 19, 2006

### laura001

20. Oct 19, 2006
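The virtual-work computation for the first problem in this thread can be verified numerically. This is a sketch assuming the geometry quoted in the thread (span 4 m, load F = 2 kN at d = 2.2 m, hinge at x = 0.6 m) and the small-rotation identity α ≈ tan α:

```python
# Virtual work check for the bending moment at x = 0.6 m.
F = 2.0      # kN, applied force
span = 4.0   # m, beam span (from the sketch in the thread)
d = 2.2      # m, position of F
x = 0.6      # m, position of the hinge from the left roller

# Similar triangles from the displacement sketch: 0.6/4 = d_F/(4 - 2.2)
d_F = x / span * (span - d)

# Small rotations of the two disks (alpha ~ tan(alpha)):
A1 = (span - x) / span   # 3.4/4
A2 = x / span            # 0.6/4

# Principle of virtual work: F*d_F + M*(A1 + A2) = 0
M = -F * d_F / (A1 + A2)
print(round(M, 2))  # -0.54 kNm, matching the value quoted in the thread
```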
{}
Please use this identifier to cite or link to this item: http://repository.aaup.edu/jspui/handle/123456789/1299

Title: Monotonicity Analysis of Fractional Proportional Differences
Authors: Suwan, Iyad (AAUP, Palestinian); Oweis, Shahd (AAUP, Palestinian); Abusaa, Muayad (AAUP, Palestinian); Abdljawad, Thabet (Other)
Keywords: Riemann-Liouville (RL) fractional proportional difference; Caputo fractional proportional difference; fractional proportional Mean Value Theorem (MVT)
Issue Date: 1-May-2020
Publisher: Hindawi
Series/Report no.: 4867927
Abstract: In this work, the nabla discrete new Riemann-Liouville and Caputo fractional proportional differences of order $0<\varepsilon<1$ on the time scale $\mathbb{Z}$ are formulated. The differences and summations of discrete fractional proportional are detected on $\mathbb{Z}$, and the fractional proportional sums associated to $\left( ^{R} _{c} \nabla ^{\varepsilon , \rho} \chi \right)(z)$ with order $0<\varepsilon<1$ are defined. The relation between nabla Riemann-Liouville and Caputo fractional proportional differences is derived. The monotonicity results for the nabla Caputo fractional proportional difference are proved; specifically, if $( _{c-1} ^{R} \nabla ^{\varepsilon , \rho} \chi )(z) > 0$ then $\chi(z)$ is $\varepsilon \rho$-increasing, and if $\chi(z)$ is strictly increasing on $\mathbb{N}_{c}$ and $\chi(c)>0$, then $( _{c-1} ^{R} \nabla ^{\varepsilon , \rho } \chi )(z) > 0$. As an application of our findings, a new version of the fractional proportional difference Mean Value Theorem (MVT) on $\mathbb{Z}$ is proved.
DOI: https://doi.org/10.1155/2020/4867927
URI: http://repository.aaup.edu/jspui/handle/123456789/1299
Appears in Collections: Faculty & Staff Scientific Research publications
{}
Lemma 15.48.3. Let $(R, \mathfrak m, \kappa )$ be a regular local ring. Let $m \geq 1$. Let $f_1, \ldots , f_ m \in \mathfrak m$. Assume there exist derivations $D_1, \ldots , D_ m : R \to R$ such that $\det _{1 \leq i, j \leq m}(D_ i(f_ j))$ is a unit of $R$. Then $R/(f_1, \ldots , f_ m)$ is regular and $f_1, \ldots , f_ m$ is a regular sequence.

Proof. It suffices to prove that $f_1, \ldots , f_ m$ are $\kappa$-linearly independent in $\mathfrak m/\mathfrak m^2$, see Algebra, Lemma 10.106.3. However, if there is a nontrivial linear relation, then we get $\sum a_ i f_ i \in \mathfrak m^2$ for some $a_ i \in R$ but not all $a_ i \in \mathfrak m$. Observe that $D_ i(\mathfrak m^2) \subset \mathfrak m$ and $D_ i(a_ j f_ j) \equiv a_ j D_ i(f_ j) \bmod \mathfrak m$ by the Leibniz rule for derivations. Hence this would imply $\sum a_ j D_ i(f_ j) \in \mathfrak m$, which would contradict the assumption on the determinant. $\square$
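A concrete sanity check of the determinant hypothesis (an illustrative sketch, not part of the Stacks Project text): in $R = k[[x, y]]$ take $f_1 = x + y^2$, $f_2 = y + x^3$ with $D_1 = \partial/\partial x$, $D_2 = \partial/\partial y$. Then $\det(D_i(f_j)) = 1 - 6x^2y$, which has nonzero constant term and hence is a unit of $R$. Since a power series is a unit iff its value at the origin is nonzero, this can be checked numerically:

```python
# f1 = x + y^2 and f2 = y + x^3 in R = k[[x, y]]; D1 = d/dx, D2 = d/dy.
# det(D_i(f_j)) is a unit of k[[x, y]] iff its constant term (the value
# at the origin) is nonzero.

f = [lambda x, y: x + y**2,
     lambda x, y: y + x**3]

def partial(g, i, x, y, h=1e-6):
    """Central finite-difference approximation to D_i(g) at (x, y)."""
    if i == 0:
        return (g(x + h, y) - g(x - h, y)) / (2 * h)
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

# 2x2 matrix D_i(f_j) evaluated at the origin, then its determinant
d = [[partial(fj, i, 0.0, 0.0) for fj in f] for i in range(2)]
det0 = d[0][0] * d[1][1] - d[0][1] * d[1][0]

print(abs(det0) > 0.5)  # True: the constant term of 1 - 6*x^2*y is 1
```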
{}
A given fraction and the fraction obtained by multiplying (or dividing) both its numerator and denominator by the same non-zero number are called equivalent fractions. $\frac{1}{2}$, $\frac{2}{4}$, $\frac{3}{6}$ are examples of equivalent fractions. Equivalent fractions are found by multiplying both the numerator and the denominator by the same number. Equivalent fractions can also be found by dividing both the numerator and the denominator by the same number.

## What is an Equivalent Fraction?

Look at the representation of a few fractions: the shaded areas in such figures represent the fractions $\frac{1}{2}$, $\frac{2}{4}$, $\frac{3}{6}$ respectively. If we place one diagram over the other, the shaded areas are found to be equal. That is, the fractions representing these shaded areas are equal. Such fractions are called equivalent fractions.

### Facts about Equivalent Fractions:

If two fractions are equivalent, then the product of the numerator of the first and the denominator of the second is equal to the product of the numerator of the second and the denominator of the first fraction. In other words, if $\frac{a}{b}$ = $\frac{c}{d}$, then we can say that ad = bc.
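The cross-multiplication fact above gives an immediate equivalence test: $\frac{a}{b}$ and $\frac{c}{d}$ are equivalent exactly when $ad = bc$. A minimal sketch:

```python
def equivalent(a, b, c, d):
    """Return True if a/b and c/d are equivalent fractions (b, d nonzero)."""
    return a * d == b * c

print(equivalent(1, 2, 3, 6))  # True:  1*6 == 2*3
print(equivalent(2, 4, 3, 5))  # False: 2*5 != 4*3
```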
{}
# Monomials and Operations on them

A monomial is an expression that contains numbers, natural powers of variables, and their products, and does not contain any other operations on numbers and variables. For example, 3a*(2.5a^3), (5ab^2)*(0.4c^3d), and x^2y*(-2z)*0.85 are monomials, whereas the expressions a+b and (ab)/c are not monomials.

Any monomial can be transformed to the standard form, i.e. represented as a product of a numerical multiplier, which stands in the first place, and powers of the distinct variables. The numerical multiplier of a monomial written in the standard form is called the coefficient of the monomial. The sum of the exponents of all variables is called the degree of the monomial.

If we write a multiplication sign between two monomials, we obtain a monomial that is called the product of the given monomials. When we raise a monomial to a natural power we also obtain a monomial. The result is usually transformed to the standard form. The transformation of a monomial to the standard form and the multiplication of monomials are identical transformations.

Example 1. Transform the monomial 3a*(2.5a^3) to the standard form: 3a*(2.5a^3)=(3*2.5)*(a*a^3)=7.5a^4.

Example 2. Multiply the monomials 24ab^2cd^3 and 0.3a^2b^3c: 24ab^2cd^3*(0.3a^2b^3c)=(24*0.3)*(a*a^2)*(b^2*b^3)*(c*c)*d^3=7.2a^3b^5c^2d^3.

Example 3. Raise the monomial (-3ab^2c^3) to the fourth power: (-3ab^2c^3)^4=(-3)^4*a^4*(b^2)^4*(c^3)^4=81a^4b^8c^12.

Monomials in the standard form are called similar if they differ only by their coefficients or do not differ at all. Similar monomials can be added and subtracted, and the result is again a monomial similar to the originals (sometimes we obtain 0). Addition and subtraction of similar monomials is called collecting similar terms.

Example 4. Add 5x^2yz^3 and -8x^2yz^3: 5x^2yz^3+(-8x^2yz^3)=(5+(-8))x^2yz^3=-3x^2yz^3.
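The standard-form bookkeeping (multiply the coefficients, add the exponents of like variables) is easy to mechanize. A minimal sketch that represents a monomial as a coefficient together with a variable-to-exponent map:

```python
def multiply(m1, m2):
    """Multiply two monomials given as (coefficient, {variable: exponent})."""
    c1, vars1 = m1
    c2, vars2 = m2
    result = dict(vars1)
    for v, e in vars2.items():
        result[v] = result.get(v, 0) + e   # a^m * a^n = a^(m+n)
    return (c1 * c2, result)

# Example 2 from the text: 24ab^2cd^3 * 0.3a^2b^3c = 7.2a^3b^5c^2d^3
coef, exps = multiply((24, {'a': 1, 'b': 2, 'c': 1, 'd': 3}),
                      (0.3, {'a': 2, 'b': 3, 'c': 1}))
print(round(coef, 6), exps)  # 7.2 {'a': 3, 'b': 5, 'c': 2, 'd': 3}
```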
{}
{}
# Reaction Mechanism Rules and Patterns Reaction mechanisms show the mechanistic steps required to convert reactants into products. A reaction mechanism is the exact step or steps required to convert reactants into products. Mechanisms explain how chemical reactions proceed. Mechanisms consist of individual steps called mechanistic steps. A mechanistic step is a step in a reaction mechanism. The four mechanistic steps are proton transfer, nucleophilic attack, loss of a leaving group, and rearrangement. ### Recognizing the Four Main Mechanistic Steps There are four main steps in mechanisms: proton transfer, nucleophilic attack, loss of leaving group, and rearrangement. The proton (H+) transfer step is a rapid step, especially when the proton is transferred from an acidic position to a basic position. A solvent is often used to aid proton transfer. Proton transfers involve sigma bonds breaking and sigma bonds forming. Sigma bonds are bonds formed when two orbitals overlap end to end. In the presence of water, a proton will transfer from sulfuric acid to the water molecule, creating the positively charged hydronium ion. Although acids are defined as proton donors, the mechanism involves the base attacking and removing the hydrogen from the acid. The hydrogen does not attack or move to the base. A nucleophilic attack is a step where the nucleophile, which is rich in electrons, will attack an electrophile, which is electron deficient. The lone electron pair of a nucleophile, such as a bromide ion, will attack an electrophilic carbon, forming a new covalent bond. The carbon is electron deficient because it is next to an electronegative oxygen atom. This nucleophilic attack is often paired with another mechanistic step, such as the loss of a leaving group. The loss of leaving group step occurs often but not always with a nucleophilic attack in one mechanistic step. 
As the nucleophile is attacking an electrophilic atom, the atom has to lose a leaving group to avoid violating the octet rule. In certain reactions, the loss of leaving group can precede the nucleophilic attack. The leaving group is a functional group that is able to leave a compound and usually forms a stable (weak) species. Halogens, such as bromine, chlorine, and iodine, make very good leaving groups, as do hydronium ions, mesylates, tosylates, and triflates. A good leaving group is a group that forms a very stable, weak conjugate base when it leaves. Whether initiated by the nucleophilic attack or on its own, leaving groups will leave the substrate based on their ability to form stable species. The more stable species the leaving group forms, the more likely the leaving group is to leave. The last of the four mechanistic steps is the rearrangement step. Rearrangement steps are very rare compared to the other three steps. Rearrangement steps are often intermediate steps in a mechanism where a positively charged carbocation is formed. Any time a carbocation forms, a rearrangement may occur. A carbocation is a positively charged carbon with three bonds and no lone pairs. Carbocations will always rearrange if they can form a more stable species. Carbocations are classified as primary, secondary, or tertiary based on the number of alkyl groups bonded to the positively charged carbon. Primary carbocations are bonded to one alkyl group and two hydrogens, secondary carbocations are bonded to two alkyl groups and one hydrogen, and tertiary carbocations are bonded to three alkyl groups. Tertiary carbocations are more stable than secondary carbocations, which are more stable than primary carbocations, due to the presence of alkyl groups, which donate electron density to stabilize the carbocation. ### Combining Mechanistic Steps Mechanisms are combinations of the four main mechanistic steps to show how reactants are converted into products. 
Many mechanisms are reversible, meaning they can go from reactants to products or from products to reactants. A reaction takes place in many stages, or steps. Proton transfer, nucleophilic attack, loss of a leaving group, and carbocation rearrangements are intermediate steps in the overall mechanism of a reaction. These steps can occur in any order, and often a mechanism will involve one of these steps repeating multiple times. Many mechanisms have multiple proton transfer steps within the overall mechanism. Many mechanistic steps are reversible and exist in a state of equilibrium. Most elementary steps are reversible, meaning the product or products revert back into the reactants. Reversible steps may be shown with a double-headed arrow or two arrows pointing in opposite directions. In the Haber-Bosch process for production of ammonia from nitrogen gas and hydrogen gas, the reaction is reversible, and conditions need to be adjusted to make this reaction produce ammonia.

$$\mathrm{N_2 + 3H_2 \rightarrow 2NH_3} \quad \text{Exothermic (gives out heat energy)}$$

$$\mathrm{N_2 + 3H_2 \leftarrow 2NH_3} \quad \text{Endothermic (takes in heat energy)}$$

$$\mathrm{N_2 + 3H_2 \rightleftharpoons 2NH_3}$$

### Summary of the Mechanistic Steps in Organic Reactions

| Mechanistic Step | Always Used In | May Be Used In | Arrows | Other Notes |
| --- | --- | --- | --- | --- |
| Proton transfer | Acid-base reactions | Addition and elimination | 2 arrows: 1st from lone pair (or bond) of base to hydrogen; 2nd from hydrogen-atom bond to atom (usually carbon, oxygen, nitrogen, or halogen) | Proton transfer steps can occur in any reaction type. |
| Loss of leaving group | Addition and elimination reactions | Acid-base reactions | 1 arrow: from bond between leaving group and atom (usually carbon) to the leaving group | Loss of leaving group often occurs in the same step as nucleophilic attack. |
| Nucleophilic attack | Acid-base, addition, and elimination reactions | All ionic reactions | 1 arrow: from lone pair of nucleophile to electrophilic atom (usually carbon) | Nucleophilic attack often occurs in the same step as loss of leaving group. |
| Rearrangement | None | Any reaction where a carbocation forms | 1 arrow: from methyl or hydrogen bond to carbocation | Rearrangements occur when a carbocation can rearrange to a more stable carbocation. |

The table contains a summary of each mechanistic step, the reactions it is always or sometimes used in, and the reaction arrows associated with each step. Radical reactions have a different set of mechanistic steps than the ones described in this table.
{}
# Diameter of a graph consisting of Hamilton cycles

Imagine an undirected graph $G = (V,E)$ with $|V| = n$ nodes. Its unweighted edges $E$ are the union of $h$ random Hamiltonian cycles through all nodes, each generated iid uniformly at random from the set of all Hamiltonian cycles. What is the expected diameter $D$ of $G$? The case $h=1$ is trivial and not interesting. Clearly, $D$ grows strictly monotonically with $n$ as well as with $h^{-1}$. However, I'm not sure of the exact relationship of these variables. I suspect a relationship along the lines of $D = O(\log(n)/h)$.

## Migrated from cs.stackexchange.com, Mar 28 '17 at 20:39

• You need to clarify what exactly you mean by $O$ here: is $h$ fixed? If not, how does it grow in relation to $n$? "True" two-parameter Landau symbols are notoriously awkward. – Raphael Mar 29 '17 at 6:44

At least for $h$ constant, taking the union of $h$ uniformly random Hamiltonian cycles is maybe kind of equivalent to taking a uniformly random $2h$-regular graph, whose properties as $n \to\infty$ we know quite well. One result in this direction is the following. Let $\mathcal G_{n,d}$ denote the uniform probability space of random $d$-regular graphs on $n$ vertices. By a result of Kim and Wormald, we have: If $d\ge4$ is even, then $G \in \mathcal G_{n,d}$ a.a.s. (asymptotically almost surely) has a complete Hamiltonian decomposition. In other words, with probability tending to $1$ as $n \to\infty$, a uniformly random $2h$-regular graph is the union of $h$ edge-disjoint Hamiltonian cycles. Of course, if we just take $h$ uniformly random Hamiltonian cycles, they will probably not be disjoint. But they are not too far off either. If $X_{ij}$ is the number of edges shared between the $i$-th and $j$-th Hamiltonian cycle, then $X_{ij} \sim \operatorname{Poisson}(2)$.
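The Poisson(2) claim about shared edges is easy to probe empirically (a simulation sketch, not a proof): a uniformly random Hamiltonian cycle is just a random cyclic order of the vertices, so generate two and count common edges; the sample mean should sit near 2.

```python
import random

def random_hamiltonian_cycle_edges(n, rng):
    """Edge set of a uniformly random Hamiltonian cycle on vertices 0..n-1."""
    p = list(range(n))
    rng.shuffle(p)
    return {frozenset((p[i], p[(i + 1) % n])) for i in range(n)}

rng = random.Random(0)
n, trials = 60, 500
total = sum(len(random_hamiltonian_cycle_edges(n, rng)
                & random_hamiltonian_cycle_edges(n, rng))
            for _ in range(trials))
print(total / trials)  # sample mean of shared edges, close to the Poisson mean 2
```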
So as long as $h$ is constant, the number of overlapping edges is $O(1)$ a.a.s., and with constant probability there are none. Another reason not to care about the overlaps is that I'm pretty sure that a different result is also true: if $\mathcal G_{n,d}'$ is the probability space corresponding to $\mathcal G_{n,d}$ of $d$-regular loopless multigraphs (allowing parallel edges), then for even $d$, $G \in \mathcal G_{n,d}'$ a.a.s. has a decomposition into Hamiltonian cycles that are no longer edge-disjoint. (The paper above mentions this for $d=4$, but doesn't say anything one way or the other about larger $d$; I think the same methods would solve that problem.) Since all unions of $h$ Hamiltonian cycles are equally probable outcomes of sampling from $\mathcal G_{n,2h}'$, this would tell us that results true a.a.s. of $\mathcal G_{n,2h}'$ are also true a.a.s. of this random graph model. This is nice, because many proofs about $\mathcal G_{n,d}$ go through multigraphs first anyway, and then take into account the probability that the graph is simple. In particular, this is true of the result below. A result of Bollobás and de la Vega gets the following bounds on the diameter of $\mathcal G_{n,r}$ (switching notation, they use $r$ for degree): Theorem 1. Let $r \ge 3$ and $\epsilon>0$ be fixed and define $d=d(n)$ as the least integer satisfying $$(r-1)^{d-1} \ge (2+\epsilon) rn \log n.$$ Then a.e. $r$-regular graph has diameter at most $d$. Theorem 3. The diameter of a.e. $r$-regular graph of order $n$ is at least $$\lfloor \log_{r-1} n\rfloor + \left\lfloor\log_{r-1} \log n - \log_{r-1}\frac{6r}{r-2} \right\rfloor + 1.$$ Set $r = 2h$ and that's that. Purely heuristically, I expect the answer to be $O(\log(n)/\log(h))$. Why? We can imagine that each vertex has an edge to $2h$ randomly chosen other vertices. Then heuristically we can imagine that there are about $(2h)^d$ vertices at distance $\le d$ from a fixed vertex $v$ (as long as $(2h)^d$ is small compared to $n$).
Thus if $(2h)^d \approx n$, we can expect that any fixed pair of vertices $v,w$ are likely connected by some path of length $\le d$. This equation is satisfied when $d \approx \log_{2h}(n) \sim \log(n)/\log(h)$. When $d$ is a small constant factor larger than that, we can heuristically expect there to be an overwhelming probability that any fixed pair of vertices $v,w$ are connected by a path of length $\le d$. Taking a union bound over all pairs of vertices, we can expect that there is $d=O(\log(n)/\log(h))$ such that with overwhelming probability the diameter will be $\le d$. This is not a proof -- this is just a hand-wavy back-of-the-envelope heuristic estimate.
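The heuristic can also be checked by direct simulation (a sketch: build the union of $h$ random cycles, compute the exact diameter by BFS from every vertex, and compare with $\log n / \log(2h)$; the heuristic is expected to be right only up to a constant factor):

```python
import math
import random
from collections import deque

def union_of_cycles_adj(n, h, rng):
    """Adjacency lists of the union of h uniformly random Hamiltonian cycles."""
    adj = [set() for _ in range(n)]
    for _ in range(h):
        p = list(range(n))
        rng.shuffle(p)
        for i in range(n):
            a, b = p[i], p[(i + 1) % n]
            adj[a].add(b)
            adj[b].add(a)
    return adj

def diameter(adj):
    """Exact diameter via BFS from every vertex."""
    best = 0
    for s in range(len(adj)):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        best = max(best, max(dist.values()))
    return best

rng = random.Random(1)
n, h = 200, 2
d = diameter(union_of_cycles_adj(n, h, rng))
print(d, round(math.log(n) / math.log(2 * h), 1))  # observed diameter vs. heuristic
```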
{}
# 2C3H8 + 10O2 ⟶ 6CO2 + 8H2O

###### Question:

2C3H8 + 10O2 ⟶ 6CO2 + 8H2O

Based on the above balanced equation for the combustion of propane with oxygen, how many moles of O2 are needed to produce 35.0 g CO2 (in mol)?

Group of answer choices

1.33
0.488
21.1
58.5
44.0
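The propane question reduces to two conversion factors: grams of CO2 to moles of CO2 (molar mass ≈ 44.01 g/mol), then the 10:6 mole ratio of O2 to CO2 from the balanced equation. A quick check:

```python
M_CO2 = 44.01              # g/mol, molar mass of CO2 (12.01 + 2*16.00)
mol_CO2 = 35.0 / M_CO2     # moles of CO2 produced
mol_O2 = mol_CO2 * 10 / 6  # 10 mol O2 per 6 mol CO2 in the balanced equation

print(round(mol_O2, 2))  # 1.33 mol, the first answer choice
```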
{}
# Thickness from object to lower boundary of bedding layer

Thickness of the lower part of the backfill area around the cables in a multi-layer backfill arrangement.

Symbol: $s_{4}$
Unit: m
Formula: $H_{T} - H_{c}$
Related: $H_{c}$
Used in: $R_{q21}$, $R_{q22}$, $R_{q31}$
{}
Next: Numerical Stability and Conditioning Up: A Brief Tour of Previous: A Brief Tour of   Contents   Index

# Introduction

Let $A$ be a square $n$ by $n$ matrix, $x$ a nonzero $n$ by 1 vector (a column vector), and $\lambda$ a scalar, such that

$$Ax = \lambda x. \tag{1}$$

Then $\lambda$ is called an eigenvalue of $A$, and $x$ is called a (right) eigenvector. Sometimes we refer to $(\lambda, x)$ as an eigenpair. We refer the reader to the list of symbols and acronyms on page ; we will use the notation listed there freely throughout the text. In this chapter we introduce and classify all the eigenproblems discussed in this book, describing their basic mathematical properties and interrelationships. Eigenproblems can be defined by a single square matrix $A$ as in (2.1), by a nonsquare matrix $A$, by 2 or more square or rectangular matrices, or even by a matrix function of $\lambda$. We use the word "eigenproblem" in order to encompass computing eigenvalues, eigenvectors, Schur decompositions, condition numbers of eigenvalues, singular values and vectors, and yet other terms to be defined below. After reading this chapter the reader should be able to recognize the mathematical types of many eigenproblems, which are essential to picking the most effective algorithms. Not recognizing the right mathematical type of an eigenproblem can lead to using an algorithm that might not work at all or that might take orders of magnitude more time and space than a more specialized algorithm. To illustrate the sources and interrelationships of these eigenproblems, we have a set of related examples for each one. The sections of this chapter are organized to correspond to the next six chapters of this book:

Section 2.2: Hermitian eigenproblems (HEP) (Chapter 4). This corresponds to $Ax = \lambda x$ as in (2.1), where $A$ is Hermitian, i.e., $A = A^*$.

Section 2.3: Generalized Hermitian eigenproblems (GHEP) (Chapter 5). This corresponds to $Ax = \lambda Bx$, where $A$ and $B$ are Hermitian and $B$ is positive definite (has all positive eigenvalues).

Section 2.4: Singular value decomposition (SVD) (Chapter 6).
Given any rectangular matrix $A$, this corresponds to finding the eigenvalues and eigenvectors of the Hermitian matrices $A^*A$ and $AA^*$.

Section 2.5: Non-Hermitian eigenproblems (NHEP) (Chapter 7). This corresponds to $Ax = \lambda x$ as in (2.1), where $A$ is square but otherwise general.

Section 2.6: Generalized non-Hermitian eigenproblems (GNHEP) (Chapter 8). This corresponds to $Ax = \lambda Bx$. We will first treat the most common case of the regular generalized eigenproblem, which occurs when $A$ and $B$ are square and $\beta A - \alpha B$ is nonsingular for some choice of scalars $\alpha$ and $\beta$. We will also discuss the singular case.

Section 2.7: Nonlinear eigenproblems (Chapter 9). The simplest case of this is the quadratic eigenvalue problem $(\lambda^2 M + \lambda C + K)x = 0$ and includes higher degree polynomials as well. We also discuss maximizing a real function over the space of $n$ by $k$ orthonormal matrices; this includes eigenproblems as a special case as well as much more complicated problems such as simultaneously reducing two or more symmetric matrices to diagonal form as nearly as possible using the same set of approximate eigenvectors for all of them.

Bai's note: this section is not necessary, there is not much we can/need to say. $Ax = \lambda Bx$ in the singular case is discussed in Chap 8. [Sec ] More generalized eigenproblems (Chapter 9). This chapter includes several cases. First, we discuss $Ax = \lambda Bx$ in the singular case, i.e. when the eigenvalues are not continuous functions. Second we discuss polynomial eigenproblems $(\sum_{i=0}^{d} \lambda^i A_i)x = 0$. When $d = 1$ we get $A_0 x = -\lambda A_1 x$, which corresponds to cases considered before. Third, we consider the fully nonlinear case $T(\lambda)x = 0$, where $T(\lambda)$ can depend on $\lambda$ in any continuous way. When $T(\lambda)$ is a polynomial in $\lambda$ we get the previous case.

All the eigenproblems described above arise naturally in applications arising in science and engineering. In each section we also show how one can recognize and solve closely related eigenproblems (for example, GHEPs $Ax = \lambda Bx$, where $A$ is positive definite instead of $B$). Chapters are presented in roughly increasing order of generality and complexity.
For example, the HEP $Ax = \lambda x$ is clearly a special case of the GHEP $Ax = \lambda Bx$, because we can set $B = I$. It is also a special case of the NHEP, because we can ignore $A$'s Hermitian symmetry and treat it as a general matrix. In general, the larger or more difficult an eigenvalue problem, the more important it is to use an algorithm that exploits as much of its mathematical structure as possible (such as symmetry or sparsity). For example, one can use algorithms for non-Hermitian problems to treat Hermitian ones, but the price is a large increase in time, storage, and possibly lower accuracy. Each section from 2.2 through 2.6 is organized as follows.

1. The basic definitions of eigenvalues and eigenvectors will be given.

2. Eigenspaces will be defined. A subspace $\mathcal{S}$ is defined as the space spanned by a chosen set of vectors $x_1, \ldots, x_k$; i.e., $\mathcal{S}$ is the set of all linear combinations of $x_1, \ldots, x_k$. Eigenspaces are (typically) spanned by a subset of eigenvectors and may be called invariant subspaces, deflating subspaces, or something else depending on the type of eigenproblem.

3. Equivalences will be defined; these are transformations (such as changing $A$ to $S^{-1}AS$) that leave the eigenvalues unchanged and can be used to compute a "simpler representation" of the eigenproblem. Depending on the situation, equivalences are also called similarities or congruences.

4. Eigendecompositions will be defined; these are commonly computed "simpler representations."

5. Conditioning will be discussed. A condition number measures how sensitive the eigenvalues and eigenspaces of $A$ are to small changes in $A$. These small changes could arise from roundoff or other unavoidable approximations made by the algorithm, or from uncertainty in the entries of $A$. One can get error bounds on computed eigenvalues and eigenspaces by multiplying their condition numbers by a bound on the change in $A$. For more details on how condition numbers are used to get error bounds, see §2.1.1.
An eigenvalue or eigenspace is called well-conditioned if its error bound is acceptably small for the user (what is acceptable obviously depends on the user), and ill-conditioned if it is much larger. Conditioning is important not just for interpreting the computed results of an algorithm, but for choosing the information to be computed. For example, different representations of the same eigenspace may have very different condition numbers, and it is often better to compute the better conditioned representation. Conditioning is discussed in more detail in each chapter, but the general results are summarized here. 6. Different ways of specifying an eigenproblem are listed. The most expensive eigenvalue problem is to ask for all eigenvalues and eigenvectors of $A$. Since this is often too expensive in time and space, users frequently ask for less information, such as the largest 10 eigenvalues and perhaps their eigenvectors. (Note that if $A$ is sparse, the eigenvectors are typically dense, so storing all the eigenvectors can take much more memory than storing $A$.) Also, some eigenproblems for the same matrix may be much better conditioned than others, and these may be preferable to compute. 7. Related eigenproblems are discussed. For example, if it is possible to convert an eigenproblem into a simpler and cheaper special case, this is shown. 8. The vibrational analysis of the mass-spring system shown in Figure 2.1 is used to illustrate the source and formulation of each eigenproblem. Newton's law applied to this vibrating mass-spring system yields $m_i \ddot{x}_i(t) = k_i [x_{i-1}(t) - x_i(t)] + k_{i+1} [x_{i+1}(t) - x_i(t)] - b_i \dot{x}_i(t)$, where the first term on the right-hand side is the force on mass $i$ from spring $i$, the second term is the force on mass $i$ from spring $i+1$, and the third term is the force on mass $i$ from damper $i$. In matrix form, these equations can be written as $M\ddot{x}(t) + B\dot{x}(t) + Kx(t) = 0$, where $M = \mathrm{diag}(m_1, \ldots, m_n)$, $B = \mathrm{diag}(b_1, \ldots, b_n)$, and $K$ is the symmetric tridiagonal matrix with diagonal entries $k_i + k_{i+1}$ and off-diagonal entries $-k_{i+1}$. We assume all the masses $m_i$ are positive. $M$ is called the mass matrix, $B$ is the damping matrix, and $K$ is the stiffness matrix. All three matrices are symmetric.
They are also positive definite (have all positive eigenvalues) when the $m_i$, $b_i$, and $k_i$ are positive, respectively. This differential equation becomes an eigenvalue problem by seeking solutions of the form $x(t) = e^{\lambda t} x$, where $\lambda$ is a constant scalar and $x$ is a constant vector, both of which are determined by solving appropriate eigenproblems. Electrical engineers analyzing linear circuits arrive at an analogous equation by applying Kirchhoff's and related laws instead of Newton's law. In this case $x$ represents branch currents, $M$ represents inductances, $B$ represents resistances, and $K$ represents admittances (reciprocal capacitances). Chapter 9 on nonlinear eigenproblems is organized differently, according to the structure of the specific nonlinear problems discussed. Finally, Chapters 10 and 11 treat issues common to many or all of the above eigenvalue problems. Chapter 10 treats data structures, algorithms, and software for sparse matrices, especially sparse linear solvers, which are often the most time-consuming part of an eigenvalue algorithm. Chapter 11 treats preconditioning techniques, or methods for converting an eigenproblem into a simpler one. Some preconditioning techniques are well established; others are a matter of current research. Susan Blackford 2000-11-20
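The mass-spring discussion above can be made concrete for a single degree of freedom: substituting $x(t) = e^{\lambda t}x_0$ into $m\ddot{x} + b\dot{x} + kx = 0$ gives the scalar quadratic eigenvalue problem $m\lambda^2 + b\lambda + k = 0$. A minimal Python sketch (the values of m, b, k are illustrative, not taken from the text):

```python
import cmath

# Single damped mass-spring: m*x'' + b*x' + k*x = 0 with trial solution
# x(t) = exp(lam*t)*x0 gives the quadratic eigenvalue problem
# m*lam**2 + b*lam + k = 0.  Illustrative parameter values.
m, b, k = 1.0, 0.2, 1.0

disc = cmath.sqrt(b * b - 4.0 * m * k)
lam1 = (-b + disc) / (2.0 * m)
lam2 = (-b - disc) / (2.0 * m)

# Both roots satisfy the characteristic polynomial; with positive m, b, k
# their real parts are negative, i.e. the motion is a damped oscillation.
residuals = [abs(m * lam ** 2 + b * lam + k) for lam in (lam1, lam2)]
```

With many masses the same substitution yields the matrix quadratic eigenvalue problem $(\lambda^2 M + \lambda B + K)x = 0$ of Section 2.7, which is usually solved by linearizing to a generalized eigenproblem of twice the size.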
#### Copy a log file to another path in Laravel

I want to get a copy of a log file programmatically. I tried to copy the file using this command:

$co = Storage::copy('logs/laravel-'.$start.'.log', $filename);
dd($co);

but I get an error which says the file does not exist. I believe that is because Storage looks in the default disk set in the config file. How can I tell Laravel to look in the logs folder when I run the copy command? Is there any way to do that?

Source: Laravel
in ✏️ Articles · meta

# Is it time to build my own CMS?

Today I have been thinking about the possibility of writing my own CMS. A very simple CMS. I even hinted at that in a small micro post today. This is not a new idea. I’m not going to look up the previous posts where I talked about this, but there are definitely at least 2 or 3. “But why?”, you ask. Well, the answer is more complicated than I would like it to be. There are a number of features that I want to implement that will just make my website inherently more dynamic. And as @jlelse once said, “it’s almost questionable why I use a static page generator at all”. Right now, my “CMS” is, in fact, a simple wrapper around Hugo with a few bells and whistles. Also, besides wanting to have a simple CMS, I also want to build a simple CLI that allows me to locally search, list, add and edit files, as if it was my own notebook. For that, I am taking some inspiration from nb, which the authors perfectly describe as: CLI plain-text note-taking, bookmarking, and archiving with encryption, filtering and search, Git-backed versioning and syncing, Pandoc-backed conversion, and more in a single portable script. For the sake of clarity, here are the features/things I want to implement/change on my website:

1. Dashboard that allows me to… • create, update and delete posts. With this, I would remove my Micropub endpoint. Right now, it is a bit of a hassle to translate between the Micropub format and the internal format and vice-versa. More than that: I don’t even support all the features I would like to.
2. Webmentions would still be available, but there would be a native comments box at the bottom of each page, where everyone could leave their own comment without relying on commentpara.de.
3. Improve the current search functionality. Currently, I support full-text search but it is a bit cumbersome and hidden.
4. Revive the bookmarks section!
5.
Stop relying on GoodReads for my reading section and make every reading activity an actual post.
6. As mentioned before, a CLI that would allow me to manage these things locally. Besides, this CLI would also ensure that the changes are automatically committed and pushed.

Those are the main features and changes to the functionality of the website. In addition, there are some inner workings that I would like to change. I feel that the current file hierarchy just makes everything complicated for me to access: all posts are a directory with an index, a webmentions JSON file, plus some other files that I might need. I want to separate the written content from the images and other special data. With a new CMS, I would have more flexibility on how to name my files and I’d just move all my media to BunnyCDN.

## Should I Go or should I Rust?

That’s a very good question. I have never used Rust in my life, but everyone talks about it! Is it that good? I know I would code faster in Go because I know it. But should I try going for the shiny “new” thing and learn something new? So you know, these are my current inspirations: Only Xe/site is in Rust. Ugh. I don’t really know. I tried to set up a small CLI in Rust and it took so much time to compile! Besides, I’m afraid of going deeper into Rust and then regretting it. What would you do? What do you think? I will definitely appreciate your opinion on this! Or if you don't know what a response is, you can always write a webmention comment (you don't need to know what that is).

2 interactions

said: Quick note, there is an error in the link destinations at the end of the post. All directing to https://github.com/xwmx/nb . I too want to build a CMS! The convenience of editing anywhere, desktop or mobile devices, like using WordPress… I think that would improve my willingness to write posts, or …17 Feb 2021 02:55
# zbMATH — the first resource for mathematics

The asymptotic expansion of a generalised incomplete gamma function. (English) Zbl 1041.33002 The generalization has the form $\Gamma_p(a,z)=\int_z^\infty t^{a-1} F_{2p}(t)\,dt$, where $p=1,2,3,\ldots$ and $$F_{2p}(t)=\sum_{k=0}^\infty (-1)^k \frac{t^{k/p}\, \Gamma((2k+1)/(2p))}{k!\ \Gamma(k+1/2)}.$$ Because $F_2(t)=e^{-t}$, the function $\Gamma_1(a,z)$ is the standard incomplete gamma function.
It is shown that the large-$z$ asymptotics of $\Gamma_p(a,z)$ in the sector $\vert \text{arg}\,z\vert <p\pi$ consists of $p$ exponential expansions. In $\operatorname{Re} z>0$, all these expansions are recessive at infinity and form a sequence of increasingly subdominant exponential contributions. A numerical example is included for $p=3$ and $a=1$. ##### MSC: 33B20 Incomplete beta and gamma functions 41A60 Asymptotic approximations, asymptotic expansions (steepest descent, etc.) Full Text: ##### References: [1] M. Abramowitz, I. Stegun (Ed.), Handbook of Mathematical Functions, Dover, New York, 1964. · Zbl 0171.38503 [2] Braaksma, B. L. J.: Asymptotic expansions and analytic continuations for a class of Barnes integrals. Compos. math. 15, 239-341 (1963) · Zbl 0129.28604 [3] Chaudhry, M. A.; Temme, N. M.; Veling, E. J. M.: Asymptotics and closed form of a generalized incomplete gamma function. J. comput. Appl. math. 67, 371-379 (1996) · Zbl 0853.33003 [4] Chaudhry, M. A.; Zubair, S. M.: Generalized incomplete gamma functions with applications. J. comput. Appl. math. 55, 99-124 (1994) · Zbl 0833.33002 [5] M.A. Chaudhry, S.M. Zubair, On a Class of Incomplete Gamma Functions with Applications, Chapman and Hall, New York/CRC, Boca Raton, 2001. · Zbl 1028.33003 [6] Guthmann, A.: Asymptotische Entwicklungen für unvollständige Gammafunktionen. Forum math. 3, 105-141 (1991) · Zbl 0716.33001 [7] Lavrik, A. F.: An approximate functional equation for the Dirichlet L-function. Trans. Moscow math. Soc. 18, 101-115 (1968) · Zbl 0195.33301 [8] R.B. Paris, A generalisation of Lavrik’s expansion for the Riemann zeta function, Technical Report MACS (94:01), University of Abertay Dundee, 1994. [9] Paris, R. B.; Cang, S.: An asymptotic representation for $\zeta(\frac{1}{2}+it)$. Methods. appl. Anal. 4, 449-470 (1997) · Zbl 0913.11033 [10] R.B. Paris, D. Kaminski, Asymptotics and Mellin--Barnes Integrals, Cambridge University Press, Cambridge, 2001. [11] R.B. Paris, A.D.
Wood, Asymptotics of High Order Differential Equations, Pitman Research Notes in Mathematics Series, Vol. 129, Longman Scientific and Technical, Harlow, 1986. · Zbl 0644.34052 [12] Temme, N. M.: The asymptotic expansions of the incomplete gamma functions. SIAM J. Math. anal. 10, 757-766 (1979) · Zbl 0412.33001 [13] Temme, N. M.: Special functions: an introduction to the classical functions of mathematical physics. (1996) · Zbl 0856.33001 [14] Whittaker, E. T.; Watson, G. N.: Modern analysis. (1965) · Zbl 0108.26903
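The identity $F_2(t) = e^{-t}$ quoted in the review can be checked numerically from the series definition, since for $p=1$ the two Gamma factors cancel. A small Python sketch (the truncation at 80 terms is our choice):

```python
import math

def F2p(t, p, terms=80):
    """Partial sum of F_{2p}(t) = sum_k (-1)^k t^(k/p)
    * Gamma((2k+1)/(2p)) / (k! * Gamma(k+1/2))."""
    s = 0.0
    for k in range(terms):
        s += ((-1) ** k * t ** (k / p) * math.gamma((2 * k + 1) / (2 * p))
              / (math.factorial(k) * math.gamma(k + 0.5)))
    return s

# For p = 1, Gamma((2k+1)/2) = Gamma(k+1/2), so the series collapses
# term by term to sum_k (-t)^k / k! = exp(-t).
```

This only verifies the stated special case; the paper's subject, the large-$z$ expansion of $\Gamma_p(a,z)$ for general $p$, is of course not captured by a partial sum.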
6 Ground surface

The ground temperature is calculated by the 1D thermal conduction equation

$$\rho c \frac{\partial T_g}{\partial t} = \frac{\partial}{\partial z}\left( k \frac{\partial T_g}{\partial z} \right), \qquad (55)$$

where $T_g$ is the ground temperature (K), $\rho$ is the soil density (kg m$^{-3}$), $c$ is the specific heat of soil (J kg$^{-1}$ K$^{-1}$), and $k$ is the thermal conductivity (W m$^{-1}$ K$^{-1}$). The surface temperature is given by $T_g$ at $z=0$. The boundary condition at the surface is given as follows:

$$k \frac{\partial T_g}{\partial z}\bigg|_{z=0} = (1-A)F_s - F_{ir} - F_{sh}, \qquad (56)$$

where $F_s$ is the solar radiative flux at the surface (the sign of downward flux is positive), $A$ is the surface albedo, $F_{ir}$ is the net infrared radiative flux emitted from the surface and $F_{sh}$ is the sensible heat flux (the sign of upward flux is positive). The lower boundary of the ground layer is given as an insulating boundary.

Parameters

The values of soil density, thermal conductivity and specific heat are the same as those of the standard model of Kieffer et al. (1977).

Parameter               Standard value                 Note
Surface albedo A        0.25                           Kieffer et al. (1977)
Soil density            1650 kg m^-3                   Kieffer et al. (1977)
Specific heat           588 J kg^-1 K^-1               Kieffer et al. (1977)
Thermal conductivity    7.63e-2 W m^-1 K^-1            Kieffer et al. (1977)

By using these values, the thermal inertia is 272 J m^-2 K^-1 s^-1/2 and the diurnal skin depth is about 8.2 cm.

Odaka Masatsugu, 25 April 2007
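The quoted thermal inertia and skin depth follow from the table values. A short Python check (the diurnal period, taken here as one Mars sol of 88775 s, and the skin-depth definition $d = \sqrt{\kappa P}$ with diffusivity $\kappa = k/(\rho c)$, are assumptions of this sketch):

```python
import math

# Values from the parameter table above; P (one Mars sol, seconds)
# and the skin-depth convention d = sqrt(kappa * P) are assumed here.
rho = 1650.0   # soil density, kg m^-3
c = 588.0      # specific heat, J kg^-1 K^-1
k = 7.63e-2    # thermal conductivity, W m^-1 K^-1
P = 88775.0    # diurnal period, s (assumed: one Mars sol)

inertia = math.sqrt(k * rho * c)   # thermal inertia, J m^-2 K^-1 s^-1/2
kappa = k / (rho * c)              # thermal diffusivity, m^2 s^-1
skin_depth = math.sqrt(kappa * P)  # diurnal skin depth, m
```

With these numbers the computed inertia is about 272 and the skin depth about 0.08 m, consistent with the figures quoted in the text.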
## 9.2 Regression with ARIMA errors in R

The R function Arima() will fit a regression model with ARIMA errors if the argument xreg is used. The order argument specifies the order of the ARIMA error model. If differencing is specified, then the differencing is applied to all variables in the regression model before the model is estimated. For example, the R command fit <- Arima(y, xreg=x, order=c(1,1,0)) will fit the model $$y_t' = \beta_1 x'_t + \eta'_t$$, where $$\eta'_t = \phi_1 \eta'_{t-1} + \varepsilon_t$$ is an AR(1) error. This is equivalent to the model $y_t = \beta_0 + \beta_1 x_t + \eta_t,$ where $$\eta_t$$ is an ARIMA(1,1,0) error. Notice that the constant term disappears due to the differencing. To include a constant in the differenced model, specify include.drift=TRUE. The auto.arima() function will also handle regression terms via the xreg argument. The user must specify the predictor variables to include, but auto.arima() will select the best ARIMA model for the errors. If differencing is required, then all variables are differenced during the estimation process, although the final model will be expressed in terms of the original variables. The AICc is calculated for the final model, and this value can be used to determine the best predictors. That is, the procedure should be repeated for all subsets of predictors to be considered, and the model with the lowest AICc value selected.

### Example: US Personal Consumption and Income

Figure 9.1 shows the quarterly changes in personal consumption expenditure and personal disposable income from 1970 to 2016 Q3. We would like to forecast changes in expenditure based on changes in income. A change in income does not necessarily translate to an instant change in consumption (e.g., after the loss of a job, it may take a few months for expenses to be reduced to allow for the new circumstances).
However, we will ignore this complexity in this example and try to measure the instantaneous effect of the average change of income on the average change of consumption expenditure. The data are clearly already stationary (as we are considering percentage changes rather than raw expenditure and income), so there is no need for any differencing. The fitted model is \begin{align*} y_t &= 0.599 + 0.203 x_t + \eta_t, \\ \eta_t &= 0.692 \eta_{t-1} + \varepsilon_t -0.576 \varepsilon_{t-1} + 0.198 \varepsilon_{t-2},\\ \varepsilon_t &\sim \text{NID}(0,0.322). \end{align*} We can recover estimates of both the $$\eta_t$$ and $$\varepsilon_t$$ series using the residuals() function. It is the ARIMA errors that should resemble a white noise series.
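The two ingredients of a regression with ARMA errors — the regression coefficients and the autocorrelation of the error series — can be illustrated outside R with a toy simulation. This minimal pure-Python sketch uses made-up coefficients (not the consumption/income fit above) and a simple two-step, Cochrane-Orcutt-style estimate, which is only an approximation to the joint maximum-likelihood fit that Arima() performs:

```python
import random

random.seed(1)

# Simulate y_t = b0 + b1*x_t + eta_t with AR(1) errors
# eta_t = phi*eta_{t-1} + eps_t.  Illustrative true values:
n, b0, b1, phi = 2000, 0.6, 0.2, 0.7
x, y, eta = [], [], 0.0
for _ in range(n):
    xt = random.gauss(0, 1)
    eta = phi * eta + random.gauss(0, 0.3)
    x.append(xt)
    y.append(b0 + b1 * xt + eta)

# Step 1: ordinary least squares for the regression part (consistent
# here because x is independent of the AR(1) error).
mx = sum(x) / n
my = sum(y) / n
b1_hat = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
b0_hat = my - b1_hat * mx

# Step 2: estimate phi from the lag-1 autocorrelation of the residuals,
# which play the role of the eta_t series in the text.
r = [yi - b0_hat - b1_hat * xi for xi, yi in zip(x, y)]
phi_hat = (sum(r[t] * r[t - 1] for t in range(1, n))
           / sum(rt ** 2 for rt in r))
```

As the text notes, it is these residuals (the analogue of $\varepsilon_t$ after removing the AR structure) that should resemble white noise, not the raw regression residuals $\eta_t$.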
# McAfee Mystery. - Geeks to Go Forums

## McAfee Mystery.

### #16 phillpower2 • Group: Technician • Posts: 11,022 • Joined: 25-August 09

Posted 12 May 2012 - 09:51 AM

Thanks again for the screenshot. As AVG is not on either the boot or recovery partition, the AVG uninstaller tool should remove it if run on the D: partition with no problem, but like I said, I will get this confirmed for you and post back.

### #17 Wrinkly Pete • Group: Member • Posts: 118 • Joined: 11-January 09

Posted 12 May 2012 - 10:01 AM

OK, thanks. I'll wait and see. I wondered if I could just delete the AVG file, as it has never been installed. Revo Uninstaller suggested as much, didn't it, when it said "No installation Package Found! Tip: Get more information for the application from the main windows and try to uninstall it manually!"

### #18 happyrock • Group: Moderator • Posts: 9,285 • Joined: 16-May 06

Posted 12 May 2012 - 11:51 AM

Quote: I wondered if I could just delete the AVG file

Yep... first try deleting the whole folder... no joy? Go into the folder, select a bunch of files, then delete them... if it fights you, get MoveOnBoot.

### #19 phillpower2 • Group: Technician • Posts: 11,022 • Joined: 25-August 09

Posted 12 May 2012 - 12:07 PM

Thanks happyrock

### #20 Wrinkly Pete • Group: Member • Posts: 118 • Joined: 11-January 09

Posted 12 May 2012 - 12:16 PM

Right, I'll give that a go. I'll set a Restore Point and just delete the AVG file and its contents. Then I'll restart the PC, and provided I get that far (joking), I'll carry on as normal. IF I encounter a problem I can then either restore the AVG file from the Recycle Bin or restore the PC to my Restore Point, but it's fairly unlikely to be a problem as AVG 9 was never installed on this PC and I use M.S.E. now anyway. I really can't see losing the file/folder making any difference. Thanks for the support everyone - you're great!
### #21 phillpower2 • Group: Technician • Posts: 11,022 • Joined: 25-August 09

Posted 12 May 2012 - 12:43 PM

You are welcome, good luck and let us know how it works out.

### #22 Wrinkly Pete • Group: Member • Posts: 118 • Joined: 11-January 09

Posted 12 May 2012 - 01:01 PM

Hopefully, that's it all done! You guys are really appreciated. I don't know what I'd do without you. With luck you won't hear from me again - until my next panic!

### #23 phillpower2 • Group: Technician • Posts: 11,022 • Joined: 25-August 09

Posted 12 May 2012 - 01:07 PM

How did you get rid of it?

### #24 Wrinkly Pete • Group: Member • Posts: 118 • Joined: 11-January 09

Posted 12 May 2012 - 01:12 PM

Just deleted the AVG folder in (D:) manually. I've got the Restore Point I set, should I require it, and I'll leave the deleted folder in the Recycle Bin for a few days, so I can recover the folder from there too if I needed to.

### #25 phillpower2 • Group: Technician • Posts: 11,022 • Joined: 25-August 09

Posted 12 May 2012 - 01:26 PM

You can guarantee that if you had not taken the precautions that you have, it would have gone wrong.

### #26 Wrinkly Pete • Group: Member • Posts: 118 • Joined: 11-January 09

Posted 12 May 2012 - 01:34 PM

Don't I know it! I think I was Bill Gates's test pilot for Windows XP when it first came out. I only had to glance at my PC sideways and it would crash. Good job my favourite colour was blue, as that was the main colour of my monitor a lot of the time.

### #27 phillpower2 • Group: Technician • Posts: 11,022 • Joined: 25-August 09

Posted 12 May 2012 - 01:39 PM

### #28 happyrock • Group: Moderator • Posts: 9,285 • Joined: 16-May 06

Posted 12 May 2012 - 03:13 PM

Quote: I think I was Bill Gates's test pilot for Windows XP when it first came out

We all were...
User littleO - MathOverflow

Answer to "Is it true that all convex optimization problems can be solved in polynomial time using interior-point algorithms?" (2012-04-03):

If I understand correctly, interior-point algorithms require the objective and constraint functions to have a certain amount of smoothness.

Answer to "System of linear equations resulting from a weighted graph: how to solve this numerically?" (2011-08-31):

The answer depends on what properties your system of equations has. Is the coefficient matrix symmetric positive definite? Symmetric indefinite? Not symmetric? Is the coefficient matrix sparse?

Trefethen's book Numerical Linear Algebra is a nice book on this topic.

My impression is that people normally use Lapack to solve linear systems in C++, if they don't implement their own method. I could be wrong.

Answer to "Converting (or approximating) a non-differentiable function to a differentiable function" (2011-07-31):

What if you add to your objective function the indicator function of $(-\infty,0]$, and then solve your problem with the proximal gradient method or FISTA?
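The last answer's suggestion — handle a nonsmooth term with the proximal gradient method — can be illustrated in one dimension. This sketch is not from the thread; the objective $f(x) = \tfrac12(x-3)^2 + \lambda|x|$, the step size, and the iteration count are chosen for illustration:

```python
# Minimize f(x) = 0.5*(x - 3)**2 + lam*|x| by proximal gradient (ISTA):
# a gradient step on the smooth part followed by soft-thresholding,
# which is the proximal operator of lam*|x|.  Illustrative values only.

def soft_threshold(v, t):
    """Proximal operator of t*|x|: shrink v toward zero by t."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

lam, step = 1.0, 0.5
x = 0.0
for _ in range(100):
    grad = x - 3.0  # derivative of the smooth part 0.5*(x-3)^2
    x = soft_threshold(x - step * grad, step * lam)
# For this problem the minimizer is x* = 3 - lam = 2.
```

FISTA adds a momentum term to this basic iteration to accelerate convergence, but the per-step structure (gradient step, then prox) is the same.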
1. Lancet Public Health ; 5(10): e524, 2020 10. Article in English | MEDLINE | ID: mdl-33007210 2. Article in English | MEDLINE | ID: mdl-33007976 ##### ABSTRACT BACKGROUND: Understanding SARS-CoV-2 dynamics and transmission is a serious issue. Its propagation needs to be modeled and controlled. The Alsace region in the east of France was among the first French COVID-19 clusters in 2020. METHODS: We confront evidence from three independent and retrospective sources: a population-based internet survey, an analysis of the medical records from hospital emergency care services, and a review of medical biology laboratory data. We also check the role played in virus propagation by a large religious meeting that gathered over 2000 participants from all over France in mid-February in Mulhouse. RESULTS: Our results suggest that SARS-CoV-2 was circulating several weeks before the first officially recognized case in Alsace on 26 February 2020 and the sanitary alert on 3 March 2020. The religious gathering seems to have played a role in the secondary dissemination of the epidemic in France, but not in creating the local outbreak. CONCLUSIONS: Our results illustrate how the integration of data coming from multiple sources could help trigger an early alarm in the context of an emerging disease. Good information systems, able to produce earlier alerts, could have avoided a general lockdown in France. ##### Subjects Coronavirus Infections/epidemiology, Coronavirus Infections/transmission, Pneumonia, Viral/epidemiology, Pneumonia, Viral/transmission, Betacoronavirus, Epidemiological Monitoring, France/epidemiology, Humans, Mass Behavior, Pandemics, Retrospective Studies 3. J Environ Qual ; 49(4): 921-932, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33016496 ##### ABSTRACT Colloids (1-1,000 nm) are important phosphorus (P) carriers in agricultural soils. However, most studies are based on colloids from soil waters extracted in the laboratory, thus limiting the understanding of the natural transfer of colloidal P along the soil-to-stream continuum. Here, we conducted a field study of the colloidal P in both natural soil waters and their adjacent stream waters in an agricultural catchment (Kervidy-Naizin, western France). Soil waters (10-15 cm, Albeluvisol) of two riparian wetlands and the adjacent stream waters were sampled monthly during the wet seasons of the 2015-2016 hydrological year (seven dates in total). Ultrafiltration at three pore sizes (5 kDa, 30 kDa, and 0.45 µm) was combined with inductively coupled plasma mass spectrometry (ICP-MS) to investigate variability in colloidal P concentration and its concomitant elemental composition. Results showed that colloidal P represented, on average, 45 and 30% of the total P (<0.45 µm) in the soil waters and stream waters, respectively. We found that colloidal P was preferentially associated with (a) organic carbon in the fine nanoparticle fraction (5-30 kDa) and (b) iron oxyhydroxides and organic carbon in the coarse colloidal fraction (30 kDa-0.45 µm). The results confirmed that colloidal P is an important component of total P in both soil waters and stream waters under field conditions, suggesting that riparian wetlands are hotspot zones for the production of colloidal P at the catchment scale, which has the potential to be transported to adjacent streams. ##### Subjects Rivers, Soil, Colloids, France, Phosphorus/analysis 4. Cancer Radiother ; 24(6-7): 762-767, 2020 Oct. Article in French | MEDLINE | ID: mdl-32873486 ##### ABSTRACT Health data financed by the French national solidarity system constitute a common heritage. Such data should be exploited to optimize care while complying with ethics and the fundamental rights of citizens.
The creation of the Health Data Hub (HDH) was allowed by the 24 July 2019 law on the organization and transformation of the French health system. Its objective is to enable authorized innovative project leaders to access non-nominative data via a state-of-the-art secure technological platform. It appears to be one of the strong points of the French artificial intelligence strategy. This structure is a public interest group which associates 56 stakeholders, mostly from the public authorities. It implements, in partnership with the National Health Insurance Fund, the major strategic orientations relating to the National Health Data System set by the French State and the Ministry of Solidarity and Health. The Health Data Hub allows cross-referencing of consolidated databases with SNDS data. Several use cases are under construction. The creation of relational databases in radiation oncology is also possible through specific strategies to get pseudonymized data from the various radiotherapy software programs upstream of the Health Data Hub. ##### Subjects 5. Epidemiol Infect ; 148: e221, 2020 09 22. Article in English | MEDLINE | ID: mdl-32958091 ##### ABSTRACT The main objective of this paper is to address the following question: are the containment measures imposed by most of the world's governments effective and sufficient to stop the COVID-19 epidemic beyond the lockdown period? In this paper, we propose a mathematical model which allows us to investigate and analyse this problem. We show by means of the reproduction number ${\cal R}_0$ that the containment measures appear to have slowed the growth of the outbreak. Nevertheless, these measures remain effective only as long as a very large fraction of the population, $p$, greater than the critical value $1-1/{\cal R}_0$, remains confined.
Using current French data, we present simulation experiments with five scenarios, including: (i) the validation of the model with $p$ estimated at 93%, (ii) the study of the effectiveness of containment measures, (iii) the study of the effectiveness of large-scale testing, (iv) the study of social distancing and mask-wearing measures and (v) a study combining large-scale testing for the detection of infected individuals with social distancing under a linear progressive easing of restrictions. The latter scenario was shown to be effective at overcoming the outbreak if the transmission rate decreases to 75% and the number of detection tests is multiplied by three. We also noticed that if the measures studied in our five scenarios are taken separately, then a second wave might occur, at least as long as the parameter values remain unchanged. ##### Subjects Communicable Disease Control/methods, Coronavirus Infections/prevention & control, Pandemics/prevention & control, Pneumonia, Viral/prevention & control, Betacoronavirus, Computer Simulation, Coronavirus Infections/epidemiology, Coronavirus Infections/transmission, France/epidemiology, Humans, Models, Theoretical, Pneumonia, Viral/epidemiology, Pneumonia, Viral/transmission, Reproducibility of Results 6. PLoS One ; 15(9): e0239573, 2020. Article in English | MEDLINE | ID: mdl-32970772 ##### ABSTRACT INTRODUCTION: Severe acute respiratory syndrome coronavirus 2 has caused a global pandemic of coronavirus disease 2019 (COVID-19). High-density lipoproteins (HDLs), particles chiefly known for their reverse cholesterol transport function, also display pleiotropic properties, including anti-inflammatory or antioxidant functions. HDLs and low-density lipoproteins (LDLs) can neutralize lipopolysaccharides and increase bacterial clearance.
HDL cholesterol (HDL-C) and LDL cholesterol (LDL-C) decrease during bacterial sepsis, and an association has been reported between low lipoprotein levels and poor patient outcomes. The goal of this study was to characterize the lipoprotein profiles of severe ICU patients hospitalized for COVID-19 pneumonia and to assess their changes during bacterial ventilator-associated pneumonia (VAP) superinfection. METHODS: A prospective study was conducted in a university hospital ICU. All consecutive patients admitted for COVID-19 pneumonia were included. Lipoprotein levels were assessed at admission and daily thereafter. The assessed outcomes were survival at 28 days and the incidence of VAP. RESULTS: A total of 48 patients were included. Upon admission, lipoprotein concentrations were low, typically under the reference values ([HDL-C] = 0.7[0.5-0.9] mmol/L; [LDL-C] = 1.8[1.3-2.3] mmol/L). A statistically significant increase in HDL-C and LDL-C over time during the ICU stay was found. There was no relationship between HDL-C and LDL-C concentrations and mortality on day 28 (log-rank p = 0.554 and p = 0.083, respectively). A comparison of alive and dead patients on day 28 did not reveal any differences in HDL-C and LDL-C concentrations over time. Bacterial VAP was frequent (64%). An association was observed between HDL-C and LDL-C concentrations on the day of the first VAP diagnosis and mortality ([HDL-C] = 0.6[0.5-0.9] mmol/L in survivors vs. [HDL-C] = 0.5[0.3-0.6] mmol/L in nonsurvivors, p = 0.036; [LDL-C] = 2.2[1.9-3.0] mmol/L in survivors vs. [LDL-C] = 1.3[0.9-2.0] mmol/L in nonsurvivors, p = 0.006). CONCLUSION: HDL-C and LDL-C concentrations upon ICU admission are low in severe COVID-19 pneumonia patients but are not associated with poor outcomes. However, low lipoprotein concentrations in the case of bacterial superinfection during ICU hospitalization are associated with mortality, which reinforces the potential role of these particles during bacterial sepsis. 
##### Subjects Cholesterol, HDL/blood, Cholesterol, LDL/blood, Coronavirus Infections/blood, Pneumonia, Bacterial/blood, Pneumonia, Ventilator-Associated/blood, Pneumonia, Viral/blood, Superinfection/blood, Aged, Betacoronavirus, Coronavirus Infections/mortality, Female, France, Hospitals, University, Humans, Intensive Care Units, Male, Middle Aged, Pandemics, Pneumonia, Bacterial/mortality, Pneumonia, Ventilator-Associated/mortality, Pneumonia, Viral/mortality, Prospective Studies 7. Rev Infirm ; 69(263): 37-39, 2020. Article in French | MEDLINE | ID: mdl-32993905 ##### ABSTRACT Covid-19: psychological support programmes. The spread of Covid-19 in France, the confinement of the population and the changes to our way of life as a result of the health crisis have caused psychological distress to many people of all ages and conditions. In response to these problems, numerous remote psychological support programmes have been set up through teleconsultations. PsyCovid-19, created at Cadillac psychiatric hospital, is one such example. ##### Subjects Coronavirus Infections/psychology, Pneumonia, Viral/psychology, Psychological Distress, Psychosocial Support Systems, Telemedicine, Betacoronavirus, France/epidemiology, Humans, Pandemics 8. Bull Cancer ; 107(9): 867-880, 2020 Sep. Article in French | MEDLINE | ID: mdl-32919610 ##### ABSTRACT INTRODUCTION: Few studies have explored the long-term occupational situation after cancer. The aims of our study were to examine the employment status of long-term cancer survivors and to compare it to that of cancer-free controls from the general population at 5, 10 or 15 years after cancer diagnosis. METHODS: Using data from a registry-based study, long-term survivors of breast, cervical and colorectal cancer, randomly selected from three tumor registries in France, were compared to cancer-free controls randomly selected from electoral lists.
We selected active cancer survivors and cancer-free controls aged less than 60 at the time of the survey. We have studied the employment status of cases vs. controls and the factors associated with employment status. RESULTS: At 5, 10 or 15 years after diagnosis, we did not observe any significant difference in employment status between cases and controls. Among cases, 17% had lost their jobs. Older age, lower incomes, lower education, a short-term employment contract, the presence of co-morbidities, fatigue and a worse quality of life were associated with job loss. DISCUSSION: Although the employment status of the cases was comparable to that of the controls, efforts should be intensified to make it easier for patients diagnosed with cancer to return to work. ##### Assuntos Sobreviventes de Câncer , Emprego/estatística & dados numéricos , Adulto , Neoplasias da Mama/terapia , Neoplasias Colorretais/terapia , Estudos Transversais , Feminino , França , Humanos , Masculino , Pessoa de Meia-Idade , Sistema de Registros , Fatores de Tempo , Neoplasias do Colo do Útero/terapia , Adulto Jovem 9. BMC Infect Dis ; 20(1): 682, 2020 Sep 17. Artigo em Inglês | MEDLINE | ID: mdl-32942989 ##### RESUMO BACKGROUND: Enterobacter cloacae species is responsible for nosocomial outbreaks in vulnerable patients in neonatal intensive care units (NICU). The environment can constitute the reservoir and source of infection in NICUs. Herein we report the impact of preventive measures implemented after an Enterobacter cloacae outbreak inside a NICU. METHODS: This retrospective study was conducted in one level 3 NICU in Lyon, France, over a 6 year-period (2012-2018). After an outbreak of Enterobacter cloacae infections in hospitalized neonates in 2013, several measures were implemented including intensive biocleaning and education of medical staff. Clinical and microbiological characteristics of infected patients and evolution of colonization/infection with Enterobacter spp. 
in this NICU were retrieved. Moreover, whole genome sequencing was performed on 6 outbreak strains. RESULTS: Enterobacter spp. was isolated in 469 patients, and 30 patients developed an infection, including 2 cases of meningitis and 12 fatal cases. Preventive measures and education of medical staff were not associated with a significant decrease in patient colonisation but led to a persistent decrease in the use of cephalosporins in the NICU. Infection strains were genetically diverse, supporting the hypothesis of multiple hygiene defects rather than the diffusion of a single clone. CONCLUSIONS: Grouped cases of infections inside one setting are not necessarily related to a single-clone outbreak and could reveal other environmental and organisational problems. The fight against the establishment and transmission of Enterobacter spp. in NICUs remains a major challenge. ##### Assuntos Enterobacter cloacae/patogenicidade , Infecções por Enterobacteriaceae/epidemiologia , Infecções por Enterobacteriaceae/prevenção & controle , Controle de Infecções/métodos , Surtos de Doenças/prevenção & controle , Enterobacter cloacae/genética , Enterobacter cloacae/isolamento & purificação , Infecções por Enterobacteriaceae/microbiologia , Fezes/microbiologia , Feminino , França , Humanos , Higiene , Recém-Nascido , Unidades de Terapia Intensiva Neonatal/estatística & dados numéricos , Masculino , Sepse Neonatal/epidemiologia , Sepse Neonatal/microbiologia , Estudos Retrospectivos , Sequenciamento Completo do Genoma 10. Sante Publique ; 32(2): 247-251, 2020. Artigo em Francês | MEDLINE | ID: mdl-32985841 ##### RESUMO The COVID-19 Coronavirus epidemic started in December 2019 in China, and progressed very quickly in France. Its consequences were the implementation of national measures such as the containment of the population, but also a disorganization of the healthcare system, in particular concerning oral care. 
Indeed, dental procedures produce aerosols which can be loaded with viral particles, and as such, constitute a major route of contamination by the virus. At the request of the Conference of Deans of the Faculties of Odontology, the National College of University Dentists in Public Health (CNCDUSP) set up a working group in order to issue recommendations for oral care in the context of the COVID-19 epidemic, given the specific risks faced by practitioners. Considering the lack of awareness of the specifics of dentistry in the medical world and among decision-makers, and given the speed with which national measures to fight the epidemic were implemented, the recommendations of the CNCDUSP had to be drawn up rigorously and quickly before being released to the profession. They take into account epidemiological data related to the virus, the specificities of oral care, and thus propose protective measures for dental surgery professionals. The necessary adaptation of the healthcare system during an epidemic will certainly make it possible to learn lessons from this health crisis. ##### Assuntos Infecções por Coronavirus/epidemiologia , Assistência Odontológica/organização & administração , Epidemias , Pneumonia Viral/epidemiologia , França/epidemiologia , Humanos , Pandemias 11. Lancet Respir Med ; 8(10): e73, 2020 10. Artigo em Inglês | MEDLINE | ID: mdl-32941850 12. Lancet Public Health ; 5(10): e536-e542, 2020 10. Artigo em Inglês | MEDLINE | ID: mdl-32950075 ##### Assuntos Infecções por Coronavirus/epidemiologia , Infecções por Coronavirus/prevenção & controle , Infarto do Miocárdio/terapia , Pandemias/prevenção & controle , Admissão do Paciente/estatística & dados numéricos , Pneumonia Viral/epidemiologia , Pneumonia Viral/prevenção & controle , Idoso , Idoso de 80 Anos ou mais , Estudos de Coortes , Feminino , França/epidemiologia , Humanos , Masculino , Pessoa de Meia-Idade , Prevalência , Sistema de Registros , Fatores de Risco 13. 
Ann Ist Super Sanita ; 56(3): 373-377, 2020. Artigo em Inglês | MEDLINE | ID: mdl-32959804 ##### RESUMO We aimed to compare COVID-19-specific and all-cause mortality rates among natives and migrants in Italy and to investigate the clinical characteristics of individuals dying with COVID-19 by native/migrant status. The mortality rates and detailed clinical characteristics of natives and migrants dying with COVID-19 were explored by considering the medical charts of a representative sample of patients deceased in Italian hospitals (n = 2,687) between February 21st and April 29th, 2020. The migrant or native status was assigned based on the individual's country of birth. The expected all-cause mortality among natives and migrants living in Italy was derived from the last available (2018) dataset provided by the Italian National Institute of Statistics. Overall, 68 individuals with a migration background were identified. The proportions of natives and migrants among the COVID-19-related deaths (97.5% and 2.5%, respectively) were similar to the relative all-cause mortality rates estimated in Italy in 2018 (97.4% and 2.6%, respectively). The clinical phenotype of migrants dying with COVID-19 was similar to that of natives except for the younger age at death. International migrants living in Italy do not have a mortality advantage for COVID-19 and are as exposed to the risk of poor outcomes as their native counterparts. ##### Assuntos 14. Rev Infirm ; 69(263): 43-45, 2020. Artigo em Francês | MEDLINE | ID: mdl-32993907 ##### RESUMO Student nurses at the heart of the Covid-19 crisis. Many student nurses were involved in dealing with the Covid-19 health crisis. As a consequence, the block release training programme was completely overturned in order to meet the urgent requirements of health and medical-social institutions. Two student nurses from Île-de-France, in their third year of training, anonymously share their experience on the ground during the health crisis. 
Their experiences, which required polyvalence, adaptability, stress management and autonomy on their part, have considerably enriched their portfolio of competencies. ##### Assuntos Infecções por Coronavirus/enfermagem , Educação em Enfermagem/organização & administração , Pandemias , Pneumonia Viral/enfermagem , Estudantes de Enfermagem/psicologia , Infecções por Coronavirus/epidemiologia , França/epidemiologia , Humanos , Pneumonia Viral/epidemiologia 15. AIDS ; 34(12): 1765-1770, 2020 10 01. Artigo em Inglês | MEDLINE | ID: mdl-32889852 ##### RESUMO OBJECTIVE: A new coronavirus severe acute respiratory syndrome coronavirus 2 (SARS-Cov-2) emerged in China during late 2019 and resulted in the coronavirus disease 2019 (COVID-19) pandemic which peaked in France in March-April 2020. Immunodeficiency, precariousness and promiscuity could increase the risk of COVID-19 in HIV-infected patients and in preexposure prophylaxis (PrEP) users. No epidemiological data are available in these two populations. We report COVID-19 attack rate in HIV-infected patients and in PrEP users in the Rhône department, France, and compared it with the general population. DESIGN: Retrospective analysis of a laboratory database. METHODS: COVID-19 testing strategy in France was centered on symptomatic infections, hospitalized patients and symptomatic healthcare workers while most asymptomatic cases were not confirmed. SARS-CoV-2 positivity rate on PCR assays and COVID-19 attack rate were determined in HIV-infected patients and in PrEP users. COVID-19 attack rate in the general population was estimated from health authorities' database and demographic data. A corrected attack rate taking into account the laboratory representativeness was calculated. RESULTS: From March to April 2020, 24 860 samples from 19 113 patients (HIV-infected 77, PrEP users 27, others 19 009) were assessed for SARS-CoV-2 PCR assay. 
The positivity rate appeared similar in HIV-infected patients (15.6%), in PrEP users (14.8%) and in other patients (19.1%). The crude/corrected COVID-19 attack rate appeared similar in HIV-infected patients (0.31/0.38%) and in PrEP users (0.38/0.42%), and of the same order as the estimated attack rate in the general population (0.24%). CONCLUSION: The risk of symptomatic COVID-19 in France appeared similar in HIV-infected patients and in PrEP users compared with the general population. ##### Assuntos Infecções por Coronavirus/epidemiologia , Infecções por HIV/complicações , Pneumonia Viral/epidemiologia , Profilaxia Pré-Exposição , Adulto , Idoso , Betacoronavirus , Técnicas de Laboratório Clínico , Infecções por Coronavirus/diagnóstico , Bases de Dados Factuais , Feminino , França/epidemiologia , Infecções por HIV/prevenção & controle , Humanos , Incidência , Modelos Logísticos , Masculino , Pessoa de Meia-Idade , Análise Multivariada , Pandemias , Estudos Retrospectivos , Fatores de Risco 16. BMC Public Health ; 20(1): 1393, 2020 Sep 12. Artigo em Inglês | MEDLINE | ID: mdl-32919467 ##### RESUMO BACKGROUND: Seine-Saint-Denis is a deprived departement (French administrative unit) in the North-East of Paris, France, hosting the majority of South Asian migrants in France. In recent years, the number of migrants from Pakistan, which has a high prevalence of hepatitis C globally, increased. As a corollary, this study addressed the high proportion of Pakistani patients in the infectious diseases clinic of a local hospital, diagnosed with hepatitis C, but also hepatitis B and Human Immunodeficiency Virus (HIV). It explored genealogies and beliefs about hepatitis and HIV transmission, including community, sexual and blood risk behaviours. The aim was to understand the ways these risk factors reduce or intensify both en route and once in France, in order to devise specific forms of community health intervention. 
METHODS: The study took place at Avicenne University-Hospital in Seine-Saint-Denis, and its environs, between July and September 2018. The design of the study was qualitative, combining semi-structured interviews, a focus group discussion, and ethnographic observations. The sample of Pakistani participants was selected from those followed-up for chronic hepatitis C, B, and/or HIV at Avicenne, and who had arrived after 2010 in Seine-Saint-Denis. RESULTS: Thirteen semi-structured interviews were conducted, until saturation was reached. All participants were men from rural Punjab province. Most took the Eastern Mediterranean human smuggling route. Findings suggest that vulnerabilities to hepatitis and HIV transmission, originating in Pakistan, are intensified along the migration route and perpetuated in France. Taboos around sexuality, overcrowded cohabitation conditions, and lack of knowledge about transmission were amongst the factors increasing vulnerabilities. Participants suggested a number of culturally-acceptable health promotion interventions in the community, such as outreach awareness and testing campaigns in workplaces, health promotion and education in mosques, as well as web-based sexual health promotion tools to preserve anonymity. CONCLUSIONS: Our findings highlight the need to look at specific groups at risk, related to their countries of origin. In-depth understandings of such groups, using interdisciplinary approaches such as were employed here, can allow for culturally adapted, tailored interventions. However, French colour-blind policies do not easily permit such targeted approaches and this limitation requires further debate. 
##### Assuntos Emigração e Imigração , Infecções por HIV/prevenção & controle , Promoção da Saúde , Hepatite B/prevenção & controle , Hepatite C/prevenção & controle , Assunção de Riscos , Migrantes , Adulto , Cultura , Grupos Étnicos , França , Conhecimentos, Atitudes e Prática em Saúde , Hepacivirus , Hepatite B Crônica/prevenção & controle , Hepatite C Crônica/prevenção & controle , Humanos , Masculino , Pessoa de Meia-Idade , Paquistão , Pesquisa Qualitativa , Fatores de Risco , Comportamento Sexual , População Suburbana , Adulto Jovem 17. J Prev Alzheimers Dis ; 7(4): 301-304, 2020. Artigo em Inglês | MEDLINE | ID: mdl-32920637 18. Lancet ; 396(10253): 757-758, 2020 09 12. Artigo em Inglês | MEDLINE | ID: mdl-32919512 19. JMIR Mhealth Uhealth ; 8(9): e23153, 2020 09 24. Artigo em Inglês | MEDLINE | ID: mdl-32924946 ##### RESUMO BACKGROUND: Critical care teams are on the front line of managing the COVID-19 pandemic, which is stressful for members of these teams. OBJECTIVE: Our objective was to assess whether the use of social networks is associated with increased anxiety related to the COVID-19 pandemic among members of critical care teams. METHODS: We distributed a web-based survey to physicians, residents, registered and auxiliary nurses, and nurse anesthetists providing critical care (anesthesiology, intensive care, or emergency medicine) in several French hospitals. The survey evaluated the respondents' use of social networks, their sources of information on COVID-19, and their levels of anxiety and information regarding COVID-19 on analog scales from 0 to 10. RESULTS: We included 641 respondents in the final analysis; 553 (86.3%) used social networks, spending a median time of 60 minutes (IQR 30-90) per day on these networks. 
COVID-19-related anxiety was higher in social network users than in health care workers who did not use these networks (median 6, IQR 5-8 vs median 5, IQR 3-7) in univariate (P=.02) and multivariate (P<.001) analyses, with an average anxiety increase of 10% in social network users. Anxiety was higher among health care workers using social networks to obtain information on COVID-19 than among those using other sources (median 6, IQR 5-8 vs median 6, IQR 4-7; P=.04). Social network users considered that they were less informed about COVID-19 than those who did not use social networks (median 8, IQR 7-9 vs median 7, IQR 6-8; P<.01). CONCLUSIONS: Our results suggest that social networks contribute to increased anxiety in critical care teams. To protect their mental health, critical care professionals should consider limiting their use of these networks during the COVID-19 pandemic. ##### Assuntos Ansiedade/epidemiologia , Infecções por Coronavirus/psicologia , Pessoal de Saúde/psicologia , Pandemias , Pneumonia Viral/psicologia , Rede Social , Adulto , Anestesiologia , Infecções por Coronavirus/epidemiologia , Infecções por Coronavirus/terapia , Cuidados Críticos , Estudos Transversais , Medicina de Emergência , Feminino , França/epidemiologia , Pessoal de Saúde/estatística & dados numéricos , Humanos , Internet , Masculino , Pessoa de Meia-Idade , Pneumonia Viral/epidemiologia , Pneumonia Viral/terapia , Estudos Prospectivos , Inquéritos e Questionários 20. Sante Publique ; 32(2): 247-251, 2020 09 15. Artigo em Francês | MEDLINE | ID: mdl-32989954 ##### Assuntos Infecções por Coronavirus/epidemiologia , Assistência Odontológica/organização & administração , Epidemias , Pneumonia Viral/epidemiologia , França/epidemiologia , Humanos , Pandemias
# Using a single harmonic oscillator to implement a quantum gate. Confusion over concept I'm trying to simulate a quantum gate operation in Mathematica using a harmonic oscillator and I have some confusion with how the physical system relates to the theory. This may be a bit long-winded but I hope the question is clear. The solutions to the Schrödinger equation for the harmonic oscillator are given by: $$\langle x|\phi_k\rangle := \frac{1}{\sqrt{2^k k! \sqrt{\pi}\ }} \, e^{-x^2/2} \, \operatorname{HermiteH}[k,x]$$ where $\text{HermiteH}[k,x]$ denotes the built in Mathematica function for finding the Hermite polynomial at energy level $k$ and position $x$. (The fact that I'm using Mathematica is irrelevant to the question.) These energy levels can be used as a physical representation of the qubits (in reality, it is not sufficient to use just one harmonic oscillator, but it serves the purpose of demonstrating the technique). The Hamiltonian for the oscillator is first defined as $\textit{H} = \hbar \omega a^{\dagger} a$. Using the Hamiltonian operator, we can then define the unitary time evolution operator $U(t) = e^{-i\textit{H}t/\hbar}$ which determines the evolution of the system over time. I have picked an example from Nielsen and Chuang's book on using a single harmonic oscillator to implement a CNOT gate. By choosing an appropriate time interval, this operator can have the desired effect we require in order to implement a CNOT gate (or any other quantum gate) operation. Now we must choose an appropriate representation for the qubits so that the time evolution operator will simulate the appropriate quantum gate. 
In the case of a CNOT gate, we would like our unitary operator to transform the qubit pairs in the following way: $$|00\rangle \rightarrow |00\rangle$$ $$|01\rangle \rightarrow |01\rangle$$ $$|10\rangle \rightarrow |11\rangle$$ $$|11\rangle \rightarrow |10\rangle \, .$$ These two qubits are then encoded by mapping them onto the following harmonic oscillator states: $$|00 \rangle = |\phi_0 \rangle$$ $$|01 \rangle = |\phi_2 \rangle$$ $$|10 \rangle = \frac{ 1}{\sqrt{2}} (|\phi_4\rangle + |\phi_1\rangle)$$ $$|11 \rangle = \frac{ 1}{\sqrt{2}} (|\phi_4\rangle - |\phi_1\rangle) \, .$$ At the start time $t=0$, the system will be in a state spanned by these basis states and then if we evolve the system forward to an appropriate time, in this case $t = \frac{\pi}{\omega}$, then the energy eigenstates of the oscillator will undergo the transformation: $$|\phi_k\rangle \rightarrow e^{-i \pi a^{\dagger} a} |\phi_k\rangle = (-1)^k |\phi_k\rangle \, .$$ This means that even values of $k$ will remain unchanged whereas odd values will pick up a minus sign $(|\phi_1\rangle \rightarrow -|\phi_1\rangle)$ and thus we obtain the required transformation. So finally here is the question: If the qubits are represented by different energy levels of the harmonic oscillator, then where does the position come into play? ( as in what can be used as an $x$ value in the equation). The book just says that we map the qubits to the energy levels but I assume that the position needs to be defined. • Welcome to Physics Stack Exchange. This is a great site for physics questions and answers. We have certain guidelines to keep the quality high and help make sure that questions get good answers. It's important to ask one specific question per post. Asking multiple questions, as this post does, makes it much less likely to get an answer. Suppose the probability that a user can answer a question is 0.01. If the post asks $n$ questions, then the probability that a user can answer it goes to 1 in a million. 
I would recommend that you edit this post to ask just one of the three questions. – DanielSank Aug 10 at 14:49 • I edited this to just include the most important question. – Dominic Brennan Aug 10 at 15:19 • Well, you already wrote down the wave functions for the various energy states. Doesn't that define the relationship between the states and the position variable? I'm not sure what else you're looking for. – DanielSank Aug 10 at 15:50 • Hi, I replied to the other comment below and I think that frames the confusion better. – Dominic Brennan Aug 11 at 17:33 The value of the position (represented here by $x$) is not really relevant for the operation of your gate. One way to look at it is that the gate operator $U(t)$ simply takes the state vector $|\phi_k\rangle$ to $(-1)^k |\phi_k\rangle$. At first glance, I would immediately assume from the notation that everything that has to do with $x$ is "absorbed" into the state vector, and we will see in a minute explicitly that it is. Qualitatively, the observable you care about is the energy level of the oscillator, not the position. While we usually define $a$ and $a^\dagger$ using $x$ and $p$, I'm not aware of any requirement that we do that; $H$ here is diagonal in the energy basis, without any required reference to position. To see this more explicitly using the $x$ basis, try acting with $U(t)$ on the wavefunction. If you write out $e^{-i\omega t\, a^\dagger a}$ using $a = \sqrt{\frac{m\omega}{2\hbar}} ( \hat{x} + \frac{i}{m\omega} \hat{p})$ you will find a factor of $e^{x^2}$, but that gets canceled out by the $e^{-x^2}$ in the wavefunction itself, leaving no residual $x$-dependence in the time evolution operator. • Sorry that previous comment was not finished. Basically what I'm trying to do is simulate a quantum gate using a real physical system as you would in a real experiment. I hope to start with a single harmonic oscillator and then move on to more complicated systems. 
I thought that because the solution to the Sch. eq. is <x|$\phi_k$>, then you must use |00>=<x|$\phi_0$>, |01>=<x|$\phi_2$> etc. as the values of the qubits. I guess I'm missing a fundamental understanding of what the difference between |$\phi_k$> and <x|$\phi_k$> is. – Dominic Brennan Aug 11 at 17:27 • Thanks for following up. I guess a source of confusion was that |$\phi_k$> seemed like a math tool that I didn't know how to relate to anything physical, just some notation that denotes the entire state. Whereas with <x|$\phi_k$>, I can visualise that by plotting the different wavefunctions at each energy level. So basically I thought you must use position (or momentum etc.) when dealing with something 'real'. I realise now that this is a flawed way of thinking and working with the state vector is just as valid as choosing a particular basis representation to work in (position, momentum...). – Dominic Brennan 15 hours ago
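The whole exchange can in fact be checked numerically in the number basis alone, with no position grid at all. Here is a minimal sketch (not from the book; the variable names and the truncation to 6 oscillator levels are my own choices, which suffice because the encoding only uses $|\phi_0\rangle$, $|\phi_1\rangle$, $|\phi_2\rangle$, $|\phi_4\rangle$):

```python
import numpy as np

# Truncated oscillator: number states |0>, ..., |5| as basis vectors.
dim = 6
n = np.arange(dim)

# U(t = pi/omega) = exp(-i * pi * a^dagger a) is diagonal in the number
# basis and sends |k> to (-1)^k |k>.
U = np.diag(np.exp(-1j * np.pi * n))

def ket(k):
    """Number state |k> as a vector in the truncated basis."""
    v = np.zeros(dim, dtype=complex)
    v[k] = 1.0
    return v

# Nielsen & Chuang's encoding of the two qubits:
s00 = ket(0)
s01 = ket(2)
s10 = (ket(4) + ket(1)) / np.sqrt(2)
s11 = (ket(4) - ket(1)) / np.sqrt(2)

# U fixes |00> and |01> and swaps |10> <-> |11>: a CNOT on the encoded pair.
assert np.allclose(U @ s00, s00)
assert np.allclose(U @ s01, s01)
assert np.allclose(U @ s10, s11)
assert np.allclose(U @ s11, s10)
```

Because $U$ is diagonal in the number basis, the simulation never needs an $x$ value; the wavefunctions $\langle x|\phi_k\rangle$ would only come into play if one wanted to plot the states in position space.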
Materials: # Homogenization of the Neumann’s brush problem François Murat (Laboratoire Jacques-Louis Lions, Université Pierre et Marie Curie and CNRS) Thursday, 10 November 2016 15:40 401 Homogenization occurs in partial differential equations when one considers a sequence of equations which describe very heterogeneous media. I will begin by recalling two classical cases of such a situation, the case where the coefficients widely vary because the body is made of a mixture of different materials, and the case where the domain is perforated by a lot of small holes with Dirichlet's boundary condition on this very fragmented boundary, and I will present in each case the result of this process. After this introduction, I will specifically consider another problem of the same family: the problem of the Neumann's brush. This is the case of a domain which has the form of a brush (in dimension N = 3) or of a comb (in dimension N = 2), i.e. which is composed of cylindrical vertical teeth distributed over a fixed basis. All the teeth have the same fixed height, but their cross sections can vary from one tooth to another, and the teeth can be adjacent, i.e. they can share parts of their boundaries. The diameter of every tooth is supposed to be less than or equal to some parameter epsilon which tends to zero, and the asymptotic volume fraction of the teeth is supposed to be bounded from below away from zero. No periodicity is assumed on the distribution of the teeth. In this widely varying domain one studies the asymptotic behavior of heat conduction, namely the solution of the Laplace equation with a zeroth order term, when the Neumann boundary condition is imposed on the whole of this complicated boundary. I will revisit this problem in the light of a recent work of Antonio Gaudiello (Naples, Italy), Olivier Guibe (Rouen, France), and myself, explaining how the transmission of the heat behaves in the teeth when the source term belongs to $L^2$. 
This is a classical problem but our homogenization result takes place in a geometry which is more general than the ones which were considered before. Moreover, we obtain a corrector result which is new. This is proved by using a very simple test function. Finally, if time permits, I will consider the case where the source term belongs to $L^1$, which motivated our work. Working in the framework of renormalized solutions, and introducing a definition of renormalized solutions for degenerate elliptic equations where only the vertical derivative is involved (such a definition is new), we are able to identify the limit problem and to prove a corrector result.
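To fix ideas about the base equation in the talk — the Laplace operator with a zeroth-order term under Neumann boundary conditions — here is a minimal one-dimensional illustration, entirely my own and not from the abstract (the grid size and source term are arbitrary choices). The zeroth-order term is what makes the pure-Neumann problem uniquely solvable without a compatibility condition on the source:

```python
import numpy as np

# Sketch: solve -u'' + u = f on [0, 1] with Neumann conditions
# u'(0) = u'(1) = 0, using a ghost-point finite-difference scheme.
N = 200
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.cos(np.pi * x)  # chosen so the exact solution is cos(pi x)/(1 + pi^2)

A = np.zeros((N + 1, N + 1))
for i in range(1, N):
    A[i, i - 1] = A[i, i + 1] = -1.0 / h**2
    A[i, i] = 2.0 / h**2 + 1.0
# Ghost-point rows: u_{-1} = u_1 and u_{N+1} = u_{N-1} enforce u' = 0.
A[0, 0] = 2.0 / h**2 + 1.0
A[0, 1] = -2.0 / h**2
A[N, N] = 2.0 / h**2 + 1.0
A[N, N - 1] = -2.0 / h**2

u = np.linalg.solve(A, f)
exact = np.cos(np.pi * x) / (1.0 + np.pi**2)  # -u'' + u = f, u'(0) = u'(1) = 0
```

The scheme is second-order accurate, so with this grid the numerical solution agrees with the exact one to well under 1e-3.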
1. ## Normal distribution question Scores on an examination are assumed to be normally distributed with a mean 78 and variance 36. If it is known that a student's score exceeds 72, what is the probability that her score exceeds 84? Let Y denote the score; since the standard deviation is $\sqrt{36}=6$, standardizing gives P(Y>84 | Y>72) = P(z > 1 | z > -1) I am stuck here and not sure what to do. 2. Originally Posted by vexiked Scores on an examination are assumed to be normally distributed with a mean 78 and variance 36. If it is known that a student's score exceeds 72, what is the probability that her score exceeds 84? Let Y denote the score; since the standard deviation is $\sqrt{36}=6$, standardizing gives P(Y>84 | Y>72) = P(z > 1 | z > -1) I am stuck here and not sure what to do. This is conditional probability, so we see that $\mathbb{P}\!\left(z>1|z>-1\right)=\frac{\mathbb{P}\!\left((z>-1)\cap (z>1)\right)}{\mathbb{P}\!\left(z>-1\right)}=\frac{\mathbb{P}\!\left(z>1\right)}{\mathbb{P}\!\left(z>-1\right)}$ Can you take it from here? 3. So looking up 1 in the table gives .1587/.1587 giving us a value of 1. This is where I am confused. 4. Originally Posted by vexiked So looking up 1 in the table gives .1587/.1587 giving us a value of 1. This is where I am confused. The denominator is NOT Pr(Z > 1), it's Pr(Z > -1).
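The conditional-probability setup in the thread is easy to check numerically. A short sketch (the function name is mine; note that standardizing divides by the standard deviation 6, not the variance 36):

```python
from math import erf, sqrt

# Y ~ N(78, 36), so sd = 6, and by conditional probability
# P(Y > 84 | Y > 72) = P(Z > 1) / P(Z > -1).
def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

z_hi = (84 - 78) / 6   # = 1
z_lo = (72 - 78) / 6   # = -1
answer = (1.0 - phi(z_hi)) / (1.0 - phi(z_lo))  # 0.1587/0.8413, approx 0.1886
```

The denominator really is P(Z > -1) ≈ 0.8413, which is why the ratio is about 0.19 rather than 1.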
• CommentRowNumber1. • CommentAuthorUrs • CommentTimeFeb 28th 2019 • Daniel Dugger, Daniel Isaksen, $\mathbb{Z}/2$-equivariant and R-motivic stable stems, Proceedings of the American Mathematical Society 145.8 (2017): 3617-3627 (arXiv:1603.09305) exposition in Daniel Dugger, Motivic stable homotopy groups of spheres (pdf) • CommentRowNumber2. • CommentAuthorUrs • CommentTimeFeb 28th 2019 I have added (here) the table of $\mathbb{Z}/2$-equivariant stable homotopy groups of spheres in low bi-degree from slide 18 of Dugger 08, summarizing Araki-Iriye 82. • CommentRowNumber3. • CommentAuthorDavidRoberts • CommentTimeFeb 28th 2019 I’m trying to figure out where in the table the group in this comment on yours fits. I don’t quite understand the description of the bigrading convention, since both the source and the target are smashes of spheres with sign reps, in the case you want. • CommentRowNumber4. • CommentAuthorDavid_Corfield • CommentTimeMar 1st 2019 What happens to the RO(G)-grading idea with the move to global equivariance? Does Peter May’s warning that it is “not the thing most intrinsic to the mathematics” need to be heeded? Presumably something intrinsic is being a stable object in the kind of equivariant $(\infty, 1)$-topos being sought here. Then what is the ’logic’ of those motivic spectra? • CommentRowNumber5. • CommentAuthorUrs • CommentTimeMar 1st 2019 • (edited Mar 1st 2019) I’m trying to figure out where in the table the group in this comment on yours fits. I don’t quite understand the description of the bigrading convention, since both the source and the target are smashes of spheres with sign reps, in the case you want. Right, so degrees on the right appear as negatives of degrees on the left. 
So the stable class of a map $S^{ 5_{sgn} } \longrightarrow S^{ 3 + 1_{sgn} }$ is an element which in Araki-Iriye’s convention is in $\pi^S_{ 4, -3 }$ with $p = 5 - 1 = 4$ the net dimension of sign reps, and $q = 0 - 3 = -3$ the net dimension of trivial reps. Hence in the table this is the entry $(p+q,p) = (1,4)$, where we have the equivariant stable homotopy group $(\mathbb{Z}/2)^2$. So at least after stabilization, the answer to my previous question is “No, there is no non-torsion stuff in that degree”. After I had written this question I realized that I had looked at this stuff in Araki-Iriye long ago already, then forgotten about it. But then I discovered this table by Dugger summarizing their results, which increases the usability by some orders of magnitude. This way I learned that I had been asking not quite the right question. What I wanted to see is in which $RO(\mathbb{Z}/2)$-degree of total dimension 4 we can see the charges of the MO5/M5-bound states, which turn out to be perfectly captured by the Cohomotopy of MO5-singularities in RO-degree $5_{sgn}$. To see this also in degree-4 Cohomotopy, subject to the constraint that orientation behaviour is respected, we find from the table now that we can either map $S^{5_{sgn} + 1} \longrightarrow S^{ 3_{sgn} + 1 }$ or $S^{5_{sgn} + 3} \longrightarrow S^{ 1_{sgn} + 3 }$. (This is an equivariant analog to how the 4-sphere sees M2-brane charge a priori measured by $S^7$, due to the fact that there is a non-torsion element $S^7 \to S^4$. What we see here is that/how equivariantly the 4-sphere sees further brane species this way, here the MO5/M5-bound system.) This may all sound mysterious. I’ll be trying to write it out comprehensively and cleanly. But will be busy the next days. Tomorrow flying back from family vacation across continents, then Domenico will be visiting for a week and we’ll be busy with another project, then I am flying to Pittsburgh for a week. • CommentRowNumber6. 
• CommentAuthorUrs • CommentTimeMar 1st 2019 • (edited Mar 1st 2019)

> What happens to the RO(G)-grading idea with the move to global equivariance?

Along the general lines laid out at orbifold cohomology, and specifically the setup on slide 78 here, I am looking at orbifolds $\mathcal{X}$ equipped with a faithful morphism to the delooping groupoid $\mathbf{B} Pin(5)^\flat$ of the discrete group underlying $Pin(5)$. Then equivariant cocycles are maps in the slice $\array{ \mathcal{X} && \longrightarrow&& S^4\sslash Pin(5)^\flat \\ & \searrow && \swarrow \\ && \mathbf{B} Pin(5)^\flat }$

Now $Pin(5)^{\flat}$ is of course not finite. But since $\mathcal{X}$ is an orbifold, around any of its singularities any given morphism on the left will factor through the inclusion (under $\mathbf{B}$) of a finite subgroup $G$ of $Pin(5)$. By pulling back along that inclusion in the above triangle diagram, we see then that in the vicinity of that singularity the “global” cocycle reduces to one in $G$-equivariant cohomology.

This took me a while to understand: that an orbifold regarded in the faithful slice in this fashion has attached to each of its singularities the further information of which “kind of charge” may be found inside this singularity, namely the choice of how the isotropy group of the singularity is to act on the coefficient 4-sphere. This is an effect of an aspect of “global” homotopy theory.

• CommentRowNumber7. • CommentAuthorDavidRoberts • CommentTimeMar 1st 2019

Thanks, Urs. I will be free to focus more on this stuff in a week and a bit.

• CommentRowNumber8. • CommentAuthorDavid_Corfield • CommentTimeMar 1st 2019

Having been mired down in other things for a while, it’s great to revisit this story with all of its parts. The one typo I saw: “Cohomotpy” (slide 78)
{}
CN 62-1072/P · ISSN 1000-0240 · Founded in 1979 · Supervised by: Chinese Academy of Sciences · Sponsored by: Cold and Arid Regions Environmental and Engineering Research Institute, Chinese Academy of Sciences, and the Geographical Society of China

Section: Cryosphere and Global Change

### Simulation of the Soil Water-Thermal Features within the Active Layer in Tanggula Region, Tibetan Plateau, by Using SHAW Model

LIU Yang 1,2,3, ZHAO Lin 1,2,3, LI Ren 1,2,3

1. Cryosphere Research Station on the Qinghai-Tibet Plateau, Cold and Arid Regions Environmental and Engineering Research Institute, Chinese Academy of Sciences, Lanzhou, Gansu 730000, China;
2. State Key Laboratory of Cryospheric Sciences, Cold and Arid Regions Environmental and Engineering Research Institute, Chinese Academy of Sciences, Lanzhou, Gansu 730000, China;
3. Cold and Arid Regions Environmental and Engineering Research Institute, Chinese Academy of Sciences, Lanzhou, Gansu 730000, China

Received: 2012-09-10 · Revised: 2012-12-21 · Online: 2013-04-25 · Published: 2013-05-14

Corresponding author: ZHAO Lin, E-mail: linzhao@lzb.ac.cn

About the first author: LIU Yang (b. 1985), female, Mongol, from Hohhot, Inner Mongolia; received her M.Sc. from the Cold and Arid Regions Environmental and Engineering Research Institute, Chinese Academy of Sciences, in 2011. E-mail: liuyang_0924@126.com

Funding: National Major Scientific Research Program of China (2013CBA01803); National Natural Science Foundation of China (41271081; 41271086); CAS "Hundred Talents" Program (51Y251571)

Abstract: Using data observed from the meteorological gradient tower and within the active layer at the Tanggula observation site on the Tibetan Plateau in 2007, combined with the SHAW model, the soil hydrothermal features within the active layer were simulated.
Meanwhile, three different surface-albedo schemes were tested in the simulations. Comparing the observed data with the three sets of simulated values shows that the SHAW model can successfully simulate the surface energy fluxes and the soil temperature within the active layer in permafrost regions; the simulated unfrozen soil water content agrees less well with observations, although its simulated trend is reasonable. Taking the monthly average surface albedo as model input clearly improves the simulation of the surface energy fluxes and of soil temperature and moisture within the active layer. After revising the model input parameters using results computed from a surface-albedo parameterization scheme, the simulated soil temperature and moisture within the active layer improve significantly, although there is no obvious improvement in the simulated surface energy fluxes. Overall, the SHAW model has an advantage in simulating the soil freezing and thawing process in permafrost regions of the Tibetan Plateau, and it is a suitable land-surface model for studying the hydrothermal processes within the active layer of high-elevation permafrost regions.

CLC number: P642.14
{}
# In any closed traverse, if the survey work is error-free, then

1. The algebraic sum of all the latitudes should be equal to zero.
2. The algebraic sum of all the departures should be equal to zero.
3. The sum of the northings should be equal to the sum of the southings.

Which of the above statements are correct?

## Options:

1. 1 and 2 only
2. 1 and 3 only
3. 2 and 3 only
4. 1, 2 and 3

### Correct Answer: Option 1

This question was previously asked in ESE Civil 2016 Paper 2: Official Paper

## Solution:

Concepts:

The latitude of a line is its perpendicular projection in the N-S direction. It is positive for a northing and negative for a southing. The departure of a line is its perpendicular projection in the E-W direction. It is positive for an easting and negative for a westing.

The closing error of a traverse is given by the sums of all the latitudes and departures, $$\sum L$$ and $$\sum D$$. A closed traverse is error-free when both sums vanish:

$$\sum {\bf{L}} = 0\;{\bf{and}}\;\sum {\bf{D}} = 0$$

For the traverse to be error-free, only the first two conditions are necessary, i.e. $$\sum {\bf{L}} = 0\;{\bf{and}}\;\sum {\bf{D}} = 0$$. The third statement is correct but not independently required, since it is an inference from condition 1: $$\sum L = 0$$ is the same as saying the sum of the northings equals the sum of the southings.

So the appropriate answer is option 1.
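The two closure conditions can be checked numerically. Here is a small sketch (the leg format and function name are my own, not from the question):

```python
import math

def closure_sums(legs):
    """Return (sum of latitudes, sum of departures) for a traverse.

    Each leg is (length, bearing), with the bearing in degrees measured
    clockwise from north.  Latitude = L*cos(bearing) is the N-S projection;
    departure = L*sin(bearing) is the E-W projection.
    """
    sum_lat = sum(length * math.cos(math.radians(brg)) for length, brg in legs)
    sum_dep = sum(length * math.sin(math.radians(brg)) for length, brg in legs)
    return sum_lat, sum_dep

# An error-free closed traverse (a 100 m square: N, E, S, W legs) closes,
# so both sums vanish up to floating-point rounding.
lat, dep = closure_sums([(100, 0), (100, 90), (100, 180), (100, 270)])
```

Both sums come out (numerically) zero, consistent with the conditions ΣL = 0 and ΣD = 0.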
{}
# Logical Expressions

The basic form of an if statement is the word if followed by a logical expression, and then a colon. All the statements that are indented beneath the if (called the body of the condition) are executed IF AND ONLY IF the logical expression is true. The following are examples of logical expressions:

| Expression | Logical meaning |
|---|---|
| a < b | True if a is less than b |
| a <= b | True if a is less than or equal to b |
| a > b | True if a is greater than b |
| a >= b | True if a is greater than or equal to b |
| a == b | True if a is equal to b (two equals signs, to distinguish it from assignment) |
| a != b | True if a is not equal to b |
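A minimal sketch tying the table to an actual if statement (the variable names are arbitrary):

```python
a, b = 3, 5

# The statements indented beneath the if (its body) run if and only if
# the logical expression after "if" is true.
if a < b:
    relation = "a is less than b"
elif a == b:
    relation = "a is equal to b"
else:
    relation = "a is greater than b"

print(relation)  # prints "a is less than b", since 3 < 5
```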
{}
## College Algebra (6th Edition)

a. $1\%$  b. 69 years

Exponential growth model: $A=A_{0}e^{kt} \qquad(k>0)$, where $A_{0}$ is the initial quantity and $A$ is the quantity after time $t$.

a. Reading $k$ directly from the model $A=4.3e^{0.01t}$ gives $k=0.01$ (New Zealand's growth rate is $1\%$).

b. Using the given formula for doubling time, $t=\displaystyle \frac{\ln 2}{0.01}\approx 69.314718056$. To the nearest whole year: 69 years.
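As a quick numerical check of part b (a throwaway computation, not from the textbook):

```python
import math

# Exponential growth A = A0 * e^(k t); the doubling time solves
# 2*A0 = A0 * e^(k t), giving t = ln(2) / k.
# New Zealand's rate from part a: k = 0.01 (1% per year).
k = 0.01
t_double = math.log(2) / k

print(round(t_double, 9))  # 69.314718056, i.e. 69 years to the nearest year
```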
{}
# Problems with multiple instances of see or seealso in index

Suspect I'm missing something obvious here. I have multiple identical instances of see or seealso index entries. This cannot be avoided. Sometimes they are appropriately compacted, but other times they are not. I am using imakeidx, but the problem is identical with makeidx, and is also resistant to deleting temporary files and the number of runs. It manifests in various ways, and it is hard to produce a single MWE that shows all of the effects. However this MWE:

```latex
\documentclass{article}
%\usepackage{imakeidx}
%\indexsetup{othercode=\footnotesize}
%\makeindex[intoc=true,title=My Index,columnsep=25pt]
\usepackage{makeidx}
\makeindex
\begin{document}
text\index{cat}text\index{cat}
text\index{dog}text\index{dog}
text\index{cat|see{pig}}text\index{cat|see{pig}}
text\index{dog|see{pig}}text\index{dog|see{pig}}
text\index{Smith John|see{Smith Jack}}
text\index{Smith John|see{Smith Jack}}
text\index{Blundell!Jack|see{Blundell John}}
text\index{Blundell!Jack|see{Blundell John}}
text\index{Blundell!Peter}
text\index{Blundell!Peter}
\index{Blogs!Peter!results}\index{Blogs!Annie!results}\index{Fuchs!Annie|see{Blogs Annie}}
\clearpage
text\index{cat}text\index{cat}
text\index{dog}text\index{dog}
text\index{cat|see{pig}}text\index{cat|see{pig}}
text\index{dog|see{pig}}text\index{dog|see{pig}}
text\index{Smith John|see{Smith Jack}}text\index{Smith John|see{Smith Jack}}
text\index{Blundell!Jack|see{Blundell John}}
text\index{Blundell!Jack|see{Blundell John}}
text\index{Blundell!Peter}
text\index{Blundell!Peter}
text\index{Blundell!Aubrey}
text\index{Blundell!Aubrey}
\index{Blogs!Peter!results}\index{Blogs!Annie!results}\index{Fuchs!Annie|see{Blogs Annie}}
\clearpage
text\index{cat}text\index{cat}
text\index{cat|see{pig}}text\index{cat|see{pig}}
text\index{dog|see{pig}}text\index{dog|see{pig}}
text\index{Smith John|see{Smith Jack}}text\index{Smith John|see{Smith Jack}}
text\index{Blundell!Aubrey}
text\index{Blundell!Aubrey}
text\index{Blundell!Jack|see{Blundell John}}
text\index{Blundell!Jack|see{Blundell John}}
text\index{Blundell!Peter}
text\index{Blundell!Peter}
\index{Blogs!Peter!results}\index{Blogs!Annie!results}\index{Fuchs!Annie|see{Blogs Annie}}
\clearpage
text\index{cat}text\index{cat}
text\index{cat|see{pig}}text\index{cat|see{pig}}
text\index{dog|see{pig}}text\index{dog|see{pig}}
text\index{Blundell!Aubrey}
text\index{Blundell!Aubrey}
text\index{Smith John|see{Smith Jack}}text\index{Smith John|see{Smith Jack}}
text\index{Blundell!Jack|see{Blundell John}}
text\index{Blundell!Jack|see{Blundell John}}
text\index{Blundell!Peter}
text\index{Blundell!Peter}
\index{Blogs!Peter!results}\index{Blogs!Annie!results}\index{Fuchs!Annie|see{Blogs Annie}}
\clearpage
text\index{cat}text\index{cat}
text\index{cat|see{pig}}text\index{cat|see{pig}}
text\index{dog|see{pig}}text\index{dog|see{pig}}
text\index{Blundell!Aubrey}
text\index{Blundell!Aubrey}
text\index{Smith John|see{Smith Jack}}text\index{Smith John|see{Smith Jack}}
text\index{Blundell!Jack|see{Blundell John}}
text\index{Blundell!Jack|see{Blundell John}}
text\index{Blundell!Peter}
text\index{Blundell!Peter}
\index{Blogs!Peter!results}\index{Blogs!Annie!results}\index{Fuchs!Annie|see{Blogs Annie}}
\printindex
\end{document}
```

produces: [screenshot of the resulting index]

As seen, the see is replicated for cat and also for dog (but only partially for the latter, where there are intervening actual index entries). Actual index entries for the same item are not always necessary to produce the effect; for example, I see things like the below in a larger document, but not in the MWE: [screenshot from the larger document]

What am I doing wrong?

• As far as I know, the see entry should appear only once. – egreg Aug 7 '15 at 13:43

• But usually when there are multiple instances it IS compacted (as seen in the MWE).
In my case it is not possible for each see to appear only once (there are multiple chapters, not all of which appear in the final document -- and collecting all see's uniquely in one place will result in some see index items that refer to items that do not exist). – Aubrey Blumsohn Aug 7 '15 at 13:47

• In my opinion, doing \index{cat}\index{cat|see{dog}} doesn't make sense; it should be seealso. And \index{cat|seealso{dog}} should appear just once. – egreg Aug 7 '15 at 13:48

• Yes that is correct - but not the issue here, as seealso is variably compacted and sometimes not compacted in the same way (in the above MWE you just get replicated displays of see also instead of see...). I have not managed to work out when it gets compacted and when it does not. – Aubrey Blumsohn Aug 7 '15 at 13:54

• @egreg Makeindex is agnostic to see. There is no difference to textbf or other page formatting commands for Makeindex. – Heiko Oberdiek Aug 7 '15 at 13:56

The see feature is nothing special; it's just a page formatting command for Makeindex. From the generated .ind file:

```
\item dog, 1, \see{pig}{1}, 2, \see{pig}{2--5}
\item Smith John, \see{Smith Jack}{1--5}
```

It is only accidental that the see in the entry for "Smith John" appears only once. The reason is that the entries are merged into a page range. The pages are not visible in the output, because the macro \see throws away its second argument (the page).

The issue can be fixed by post-processing the .idx file. The following Perl filter script replaces the page number of each see index entry by $seepagenumber, with default 9999, so that the see entry is added at the end of the page list (increase 9999 if you have more pages). The Perl script fix-see.pl acts as a filter; that means it reads from standard input and writes to standard output:

```perl
#!/usr/bin/env perl
use strict;
use warnings;
my $seepagenumber = 9999;

while (<>) {
    s/\{[^{}]*\}$/{$seepagenumber}/ if /\|see\{/;
    print;
}
__END__
```

The command sequence for a file test.tex is then, for example:

```
pdflatex test
./fix-see.pl <test.idx >test.idx-fixed
makeindex test.idx-fixed
pdflatex test
```

The result: [screenshot of the corrected index]

## Solution in LaTeX for package makeidx

The Perl script can be avoided by changing the page number for see entries in TeX, when it writes the index entry. Example for the standard definition of the index writing command (latex.ltx/package makeidx):

```latex
\usepackage{makeidx}
\makeindex

\newcommand*{\seepagenumber}{9999}
\makeatletter
\CheckCommand*{\@wrindex}[1]{%
  \protected@write\@indexfile{}{%
    \string\indexentry{#1}{\thepage}%
  }%
  \endgroup
  \@esphack
}
\renewcommand*{\@wrindex}[1]{%
  \protected@edef\idx@text{#1}%
  \expandafter\idx@test@see\idx@text|see\@nil{#1}%
}
\def\idx@test@see#1|see#2\@nil#3{%
  \protected@write\@indexfile{%
    \ifx\\#2\\%
    \else
      \let\thepage\seepagenumber
    \fi
  }{%
    \string\indexentry{#3}{\thepage}%
  }%
  \endgroup
  \@esphack
}
\makeatother
```

## Version for package imakeidx

```latex
\usepackage{imakeidx}
\indexsetup{othercode=\footnotesize}
\makeindex[intoc=true,title=My Index,columnsep=25pt]

\newcommand*{\seepagenumber}{9999}
\makeatletter
\CheckCommand\imki@wrindexentrysplit[3]{%
  \expandafter\protected@write\csname#1@idxfile\endcsname{}%
  {\string\indexentry{#2}{#3}}%
}
\CheckCommand\imki@wrindexentryunique[3]{%
  \protected@write\@indexfile{}%
  {\string\indexentry[#1]{#2}{#3}}%
}
\newif\if@IndexEntryWithSee
\renewcommand\imki@wrindexentrysplit[3]{%
  \@DoesEntryContainsSee{#2}%
  \expandafter\protected@write\csname#1@idxfile\endcsname{%
    \if@IndexEntryWithSee
      \let\thepage\seepagenumber
    \fi
  }{%
    \string\indexentry{#2}{#3}%
  }%
}
\renewcommand\imki@wrindexentryunique[3]{%
  \@DoesEntryContainsSee{#2}%
  \protected@write\@indexfile{%
    \if@IndexEntryWithSee
      \let\thepage\seepagenumber
    \fi
  }{%
    \string\indexentry[#1]{#2}{#3}%
  }%
}
\newcommand*{\@DoesEntryContainsSee}[1]{%
  \protected@edef\@IndexEntryText{#1}%
  \expandafter\@CheckForSee\@IndexEntryText|see\@nil
}
\def\@CheckForSee#1|see#2\@nil{%
  \ifx\\#2\\%
    \@IndexEntryWithSeefalse
  \else
    \@IndexEntryWithSeetrue
  \fi
}
\makeatother
```

• This looks to be a workable solution Heiko, but going back to the cause, I cannot follow when you say that "It is only accidental that the see in the entry for 'Smith John' appears only once -- because the entries are merged to a page range". The Smith John see actually does appear on every page, and should not therefore be compacted if this is the reason. – Aubrey Blumsohn Aug 7 '15 at 13:59

• @AubreyBlumsohn "Accidental" here means it depends on the page numbers and configuration which entries get merged into a page range. The entry for "Smith John" appears on all pages 1 to 5, therefore makeindex merges them to the page range "1--5"; see the .ind file. – Heiko Oberdiek Aug 7 '15 at 14:01

• Ah, I see... No wonder I was having such problems tying the problem down. I am going to accept your answer over egreg's, despite the fact that both produce excellent diagnoses. It seems to me that the solution cannot be to be forced to collect all see's in one place and to de-duplicate them, because this is not always feasible (it makes adding or subtracting text difficult without ending up with orphan sees that point to nonexistent entries, and makes it hard to add or subtract whole chapters without a complex editing exercise on the see-list).
– Aubrey Blumsohn Aug 7 '15 at 14:08

Entries with see or seealso should appear only once and at the end of the document, just before \printindex:

```latex
\documentclass{article}
%\usepackage{imakeidx}
%\indexsetup{othercode=\footnotesize}
%\makeindex[intoc=true,title=My Index,columnsep=25pt]
\usepackage{makeidx}
\makeindex
\begin{document}
text\index{cat}text\index{cat}
text\index{dog}text\index{dog}
text
text\index{Blundell!Peter}
text\index{Blundell!Peter}
\index{Blogs!Peter!results}\index{Blogs!Annie!results}
\clearpage
text\index{cat}text\index{cat}
text\index{dog}text\index{dog}
text
text\index{Blundell!Peter}
text\index{Blundell!Peter}
\index{Blogs!Peter!results}\index{Blogs!Annie!results}
\clearpage
text\index{cat}text\index{cat}
text\index{dog}text\index{dog}
text
text\index{Blundell!Peter}
text\index{Blundell!Peter}
\index{Blogs!Peter!results}\index{Blogs!Annie!results}
\clearpage
text\index{cat}text\index{cat}
text\index{dog}text\index{dog}
text
text\index{Blundell!Peter}
text\index{Blundell!Peter}
\index{Blogs!Peter!results}\index{Blogs!Annie!results}
\clearpage
\index{cat|seealso{pig}}
\index{dog|seealso{pig}}
\index{Smith John|see{Smith Jack}}
\index{Fuchs!Annie|see{Blogs Annie}}
\index{Blundell!Jack|see{Blundell John}}
\printindex
\end{document}
```
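For those who prefer not to use Perl, the same .idx post-processing idea can be sketched in Python (same assumption as above: a sentinel page number 9999 larger than any real page, so makeindex sorts the see entry last and never merges it into a page range; the function name is my own):

```python
import re

SEE_PAGE = "9999"  # sentinel; must exceed the document's real page count

def fix_see_lines(idx_lines):
    """Rewrite the trailing {page} of every |see{...} \\indexentry line."""
    fixed = []
    for line in idx_lines:
        if "|see{" in line:
            # Replace the last brace group (the page number) at end of line.
            line = re.sub(r"\{[^{}]*\}\s*$", "{" + SEE_PAGE + "}",
                          line.rstrip("\n")) + "\n"
        fixed.append(line)
    return fixed

# Only the see entry's page is rewritten; ordinary entries pass through.
demo = fix_see_lines(["\\indexentry{cat|see{pig}}{2}\n",
                      "\\indexentry{cat}{2}\n"])
```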
{}
# Do measurements of time-scales for decoherence disprove some versions of Copenhagen or MWI?

Do measurements of time-scales for decoherence disprove some versions of Copenhagen or MWI? Since these discussions of interpretations of quantum mechanics often shed more heat than light, I want to state some clear definitions.

• standard qm = linearity; observables are self-adjoint operators; the wavefunction evolves unitarily; complete sets of observables exist

• MWI-lite = synonym for standard qm

• MWI-heavy = standard qm plus various statements about worlds and branching

• CI = standard qm plus an additional axiom describing a nonunitary collapse process associated with observation

Many people who have formulated or espoused MWI-heavy or CI seem to have made statements that branching or collapse would be an instantaneous process. (Everett and von Neumann seem to have subscribed to this.) In this case, MWI-heavy and CI would be vulnerable to falsification if it could be proved that the relevant process was not instantaneous. Decoherence makes specific predictions about time scales. Are there experiments verifying predictions of the time-scale for decoherence that could be interpreted as falsifying MWI-heavy and CI (or at least some versions thereof)?

I'm open to well-reasoned answers that cite recent work and argue, e.g., that MWI-heavy and MWI-lite are the same except for irrelevant verbal connotations, or that processes like branching and collapse are inherently unobservable and therefore statements about their instantaneous nature are not empirically testable. It seems possible to me that the instantaneousness is:

• not empirically testable even in principle.

• untestable for all practical purposes (FAPP).

• testable, but only with technologies that date to ca. 1980 or later.

An example somewhat along these lines is an experiment by Lee et al.
("Generation of room-temperature entanglement in diamond with broadband pulses", can be found by googling) in which they put two macroscopic diamond crystals in an entangled state and then detected the entanglement (including phase) in 0.5 ps, which was shorter than the 7 ps decoherence time. This has been interpreted by Belli et al. as ruling out part of the parameter space for objective collapse models. If the coherence times were made longer (e.g., through the use of lower temperatures), then an experiment of this type could rule out the parameters of what is apparently the most popular viable version of this type of theory, GRW. Although this question isn't about objective collapse models, this is the same sort of general thing I'm interested in: using decoherence time-scales to rule out interpretations of quantum mechanics.

• In my textbook of old (Cohen-Tannoudji & co, 1977), the wave function collapse upon measurement is one of the postulates of quantum mechanics. There is no notion of a "collapse process" (that's the measurement problem), but it is clear that in standard QM measurement is not unitary. – Stéphane Rollandin Dec 21 '17 at 8:32

• I think I can offer a lot of practical information about this because my field (superconducting quantum electronic circuits) routinely does all kinds of experiments where the system is highly controlled and where the measurement strength itself is variable, i.e. we can extract as much or as little information as we want, i.e. we can choose to partially or completely collapse the wave function. Before I invest in an answer (which is going to take a considerable amount of time and thought) I want to encourage you to tighten up the question. (cont.) – DanielSank Dec 29 '17 at 1:26

• Asking if some versions of CI are rejected is loosey goosey because we can always imagine a variant of e.g. CI that disagrees with some piece of recent-ish experimental data.
I'd rather answer this post if I knew exactly what I'm trying to argue. I might try to first reflect the posted question in a more addressable form and then answer it, but I'd rather OP did the first part for me ;-) – DanielSank Dec 29 '17 at 1:28 • @Rococo See, this is why I want the question tightened up, and I'm afraid that juicy 500 pt bounty is going to go to a somewhat vague answer. Sure, given a so-called "weak measurement" maybe you can cook up a new observable that is actually the subject of a strong measurement, but is that really in line with what most people are thinking about when they talk about CI? I don't know! – DanielSank Dec 29 '17 at 19:12 • @Rococo: if you happen to have any sources for the sentiment that "Decoherence has been described as 'CI done right' " Thanks, googling shows that it's actually consistent histories that people refer to as CI done right. I've edited the question to remove the incorrect statement. – user4552 Dec 30 '17 at 17:56 I am not aware of any experimental evidence, so this probably does not qualify as an answer. However I can offer a reference that addresses this question theoretically: • Armen E. Allahverdyan, Roger Balian, Theo M. Nieuwenhuizen (2011) Understanding quantum measurement from the solution of dynamical models, https://arxiv.org/abs/1107.2138 and by the same group, but more recently: Essentially they do what the OP describes in the question. They take a dynamical model of a macroscopic system and solve its unitary evolution within the Schrödinger equation. Then they try to look if some "measurement-like structure" emerges just from the many-body dynamics, without collapse. There is one main difference to decorence, where usually only a system and an environment is considered (e.g. the Leggett-Caldeira model, also cf. wiki article on quantum dissipation). In the work mentioned above, a macroscopic system that mimics a detector is included. 
Like the environment, this is also a macroscopic system, but unlike the environment it has some special properties that allow it to record information. In the first paper this is done by considering a ferromagnet, whose spontaneous symmetry breaking allows it to have a macroscopic polarization, which is essentially a deterministic property after equilibration (simply because the flip probability is very low).

As far as I am aware this is far from a solution to the measurement problem; some open issues are mentioned in the articles themselves. At least it goes in the right direction, however; in particular, it starts addressing the question of measurement timescales, which can maybe also pave the way for experimental investigations thereof.

• This is directly on topic, and is about as close to a definitive answer as I could have hoped for. Some relevant portions of the long 2017 review article by Allahverdyan are pp. 123-126 (discussion of time scales), p. 155 (table of steps and time scales), 164 (fast initial truncation distinguished from slow irreversible truncation via decoherence), 168ff (CI and MWI). I asked for experimental data, but this toy model is in some ways nicer, because it is mathematically tractable, and it allows you to play with parameters and investigate wildly different time scales. – user4552 Dec 30 '17 at 17:48

• @BenCrowell: A review and discussion of the articles by Allahverdyan et al. can be found in physicsoverflow.org/39123 – Arnold Neumaier Dec 31 '17 at 12:43

Do measurements of time-scales for decoherence disprove some versions of Copenhagen or MWI?

No. From Decoherence on wikipedia (emphasis mine):

> Decoherence has been used to understand the collapse of the wavefunction in quantum mechanics. Decoherence does not generate actual wave function collapse. It only provides an explanation for the observation of wave function collapse, as the quantum nature of the system "leaks" into the environment.
That is, components of the wavefunction are decoupled from a coherent system, and acquire phases from their immediate surroundings. A total superposition of the global or universal wavefunction still exists (and remains coherent at the global level), but its ultimate fate remains an interpretational issue. Specifically, decoherence does not attempt to explain the measurement problem. Rather, decoherence provides an explanation for the transition of the system to a mixture of states that seem to correspond to those states observers perceive. As Wolpertinger said, to disprove Copenhagen or MWI you should challenge the postulate that the measurement act is instantaneous, by taking into account both detector and probe. I'm not an expert on this, so I cannot add much. I just wanted to point out that decoherence is not enough to solve the measurement problem. Some further relevant quotes: The discontinuous "wave function collapse" postulated in the Copenhagen interpretation to enable the theory to be related to the results of laboratory measurements cannot be understood as an aspect of the normal dynamics of quantum mechanics via the decoherence process. Decoherence is an important part of some modern refinements of the Copenhagen interpretation. Decoherence shows how a macroscopic system interacting with a lot of microscopic systems (e.g. collisions with air molecules or photons) moves from being in a pure quantum state—which in general will be a coherent superposition (see Schrödinger's cat)—to being in an incoherent improper mixture of these states. [...] However, decoherence by itself may not give a complete solution of the measurement problem, since all components of the wave function still exist in a global superposition, which is explicitly acknowledged in the many-worlds interpretation. All decoherence explains, in this view, is why these coherences are no longer available for inspection by local observers. 
To present a solution to the measurement problem in most interpretations of quantum mechanics, decoherence must be supplied with some nontrivial interpretational considerations [...] • There doesn't seem to be a close logical connection between the first two lines of the answer and the later material. This doesn't really seem like an answer to the question. – user4552 Dec 28 '17 at 18:17 • Decoherence is not directly related to the measurement problem. The OP was asking about using the collapse of the wave-function to rule out CI or MWI, using decoherence measurements. I will enlarge the quote to be more explicit. – Rexcirus Dec 28 '17 at 23:06 Do MWI-Heavy theories require collapse to be instantaneous? I'm not an expert on foundations of QM, but intuitively I wouldn't think it's mandatory. Isn't the essence of MWI the following: $(|0\rangle + |1\rangle )|\Psi \rangle\implies process \implies |0\rangle |\Psi_0 \rangle + |1\rangle |\Psi_1 \rangle$ For an observer, $|\Psi \rangle,$ making an observation is a process in which your observed outcomes are entangled with the state you wish to measure. After measurement, there's now an observer $|\Psi_0 \rangle$ measuring $|0 \rangle$ and an observer $|\Psi_1 \rangle$ measuring $|1 \rangle$. Checking which outcome you observe is equivalent to checking which universe you are in. The process of checking which universe you're in (verifying which observable you have) is an instantaneous process after the entanglement procedure, yes. But if wave-function collapse takes time, isn't this, in the MWI-heavy case, equivalent to having a time-dependent process entangling the observer with the state? 
$(|0\rangle + |1\rangle )|\Psi \rangle\implies process(t) \implies |\alpha(t)\rangle |\Psi_0(t) \rangle + |\beta(t)\rangle |\Psi_1(t) \rangle$

Cutting the measurement process short (maybe by making a slow measurement, which could then be observed by interrupting with a fast measurement) would entangle the observer with states that are in some superposition of $|0\rangle$ and $|1\rangle$. So your fast measurement would then give you a distribution associated with this superposition state instead of the original state. This would give you some probabilistic information about which branch you have transitioned to in the slow measurement, but unless a full measurement is made it's simply a particular likelihood.

Doing some research, it seems that the current debate on interpretations of QM involves a lot of discussion of the extended Wigner's friend thought experiment. Some think that the thought experiment shows that single-world theories cannot be consistent. Others disagree. But even those who think the measurement problem is still an open question believe that CI theories can be ruled out:

> It is clear that experiments that show increasingly large coherences can narrow the parameter regime in which spontaneous collapse theories might exist, but there is an enormous gap between current experiments and coherence experiments on truly macroscopic objects...

> The Wigner’s-friend experiment can (in principle) discriminate between two competing quantum formalisms describing a measurement — the unitary relative-state formalism and the non-unitary measurement update rule. A specific combination of these two formalisms, together with the assumption regarding possible communication, gives a contradiction. We do, however, not regard a formalism to necessarily imply a particular interpretation like “many worlds” or “collapse.” We believe that the contradiction above does, therefore, not disqualify a particular interpretation of quantum mechanics.
So CI imposes restrictions that are falsifiable, while MW-heavy theories do not require this and, if anything, get stronger with such experiments.

EDIT: As said in the comments, the theories I'm referring to are not quantum mechanics plus one extra axiom, but theories that specify very precisely when collapse can happen.

• "increasingly large coherences can narrow the parameter regime in which spontaneous collapse theories might exist" This is referring to objective collapse theories, which have adjustable parameters. It's not about CI. It's the sort of thing discussed in the paper by Belli et al. that I linked to in the question. – user4552 Dec 28 '17 at 18:20

• I don't think that link to Belli et al. is complete enough for someone not that familiar with foundations of QM to be on the same page, but I went ahead and took out the parts in which I confused your "CI" with spontaneous collapse theories. Let me know about the other parts. – Steven Sagona Dec 29 '17 at 0:58
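To make the time-scale comparison in the diamond example from the question concrete, here is a toy pure-dephasing sketch (the numbers are only loosely modeled on that experiment, and the exponential decay is the standard $T_2$ model, not anything specific to that paper):

```python
import math

T2 = 7.0      # decoherence time, ps (order of the diamond experiment)
t_meas = 0.5  # read-out time, ps (faster than decoherence)

def coherence(t, T2, c0=0.5):
    """Off-diagonal density-matrix element of a |+> qubit after pure
    dephasing for time t: populations stay at 1/2, coherence decays
    as c0 * exp(-t / T2)."""
    return c0 * math.exp(-t / T2)

c_fast = coherence(t_meas, T2)  # ~0.47: read-out well before decoherence
c_slow = coherence(5 * T2, T2)  # ~0.003: coherence essentially gone
```

The point of the toy model: a read-out at t much less than T2 still sees nearly the full coherence, which is why the 0.5 ps detection against a 7 ps decoherence time constrains collapse models at all.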
{}
# zbMATH — the first resource for mathematics ## Bickel, Peter John Compute Distance To: Author ID: bickel.peter-john Published as: Bickel, Peter J.; Bickel, P. J.; Bickel, Peter; Bickel, P.; Bickel, Peter John Homepage: http://www.stat.berkeley.edu/~bickel/ External Links: MGP · Wikidata · Math-Net.Ru · dblp · GND · IdRef Documents Indexed: 189 Publications since 1964, including 11 Books Biographic References: 2 Publications all top 5 #### Co-Authors 32 single-authored 24 Ritov, Ya’acov 13 van Zwet, Willem Rutger 12 Yahav, Joseph A. 9 Levina, Elizaveta 7 Doksum, Kjell A. 7 Lehmann, Erich Leo 6 Chen, Aiyou 5 Götze, Friedrich W. 4 Huang, Haiyan 4 Rosenblatt, Murray 4 Sakov, Anat 3 Bahadur, Raghu Raj 3 Brown, James B. 3 Bühlmann, Peter 3 Chibisov, Dmitriĭ Mikhaĭlovich 3 Freedman, David A. 3 Hodges, Joseph L. jun. 3 Klaassen, Chris A. J. 3 Kwon, Jaimyoung 3 Rice, John A. 3 Rydén, Tobias 3 Sarkar, Purnamrita 3 Stoker, Thomas M. 3 Wang, Ying Xiang Rachel 3 Wellner, Jon August 3 Zhu, Ji 2 Albers, Willem 2 Bhattacharyya, Sharmodeep 2 Boley, Nathan 2 Eisen, Michael B. 2 El Karoui, Noureddine 2 Herzberg, Agnes Margaret 2 Kechris, Katherina J. 2 Kleijn, Bas J. K. 2 Krieger, Abba M. 2 Li, Bo 2 Li, Qunhua 2 Nair, Vijayan N. 2 Olshen, Richard Allen 2 Ren, Jian-Jian 2 Rothman, Adam J. 2 van Zwet, Erik W. 1 Aït-Sahalia, Yacine 1 Amini, Arash Ali 1 Andrews, David F. 1 Aswani, Anil 1 Atherton, Juli 1 Bai, Chongen 1 Bean, Derek 1 Bengtsson, Thomas 1 Berk, Richard A. 1 Berk, Robert H. 1 Bhattacharjee, Manish C. 1 Biggin, Mark D. 1 Blackwell, David Harold 1 Breiman, Leo 1 Brillinger, David R. 1 Brown, Ben 1 Buyske, Steven G. 1 Cai, Mu 1 Campbell, Katherine 1 Chang, Huahua 1 Chang, Xiangyu 1 Chen, Chao 1 Chernoff, Herman 1 Choi, David S. 1 Collins, John R. 1 Cosman, Pamela C. 1 Davidson, Stuart M. 1 Diaconis, Persi Warren 1 El Karoui, Nicole 1 Fan, Jianquin 1 Feldman, Lewis J. 1 Ferguson, Thomas S. 1 Fovell, Robert 1 Gamst, Anthony C. 1 Ge, Zhiyu 1 Gel, Yulia R. 
1 Ghosh, Jayanta Kumar 1 Glazer, Alexander N. 1 Goodman, Leo A. 1 Hampel, Frank R. 1 Huber, Peter Jost 1 Ibragimov, Il’dar Abdullovich 1 Jewell, Nicholas P. 1 Jiang, Keni 1 Keller-McNulty, Sallie 1 Kelly, Elizabeth 1 Kim, Namhyun 1 Kundaje, Anshul 1 Kur, Gil 1 Le Cam, Lucien Marie 1 Lei, Jing 1 Lei, Lihua 1 Lim, Chinghway 1 Lindner, Marko 1 Linn, Rodman 1 Linton, Oliver Bruce 1 Lo, Albert Y. 1 Lockhart, Richard A. ...and 50 more Co-Authors all top 5 #### Serials 47 The Annals of Statistics 16 Annals of Mathematical Statistics 9 The Annals of Applied Statistics 8 Statistica Sinica 6 Journal of the American Statistical Association 5 Proceedings of the National Academy of Sciences of the United States of America 4 Statistical Science 3 International Statistical Review 3 Journal of Statistical Planning and Inference 3 Sankhyā. Series A. Methods and Techniques 3 Bernoulli 2 Scandinavian Journal of Statistics 2 The Annals of Probability 2 Probability and Mathematical Statistics 2 Test 2 Journal of the Royal Statistical Society. Series B. Statistical Methodology 2 Chapman & Hall/CRC Texts in Statistical Science Series 1 Acta Mathematica Academiae Scientiarum Hungaricae 1 Israel Journal of Mathematics 1 Psychometrika 1 Russian Mathematical Surveys 1 Teoriya Veroyatnosteĭ i eë Primeneniya 1 Theory of Probability and its Applications 1 Journal of Econometrics 1 Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 1 Advances in Applied Mathematics 1 Statistics & Probability Letters 1 Probability Theory and Related Fields 1 Journal of Theoretical Probability 1 IEEE Transactions on Signal Processing 1 Notices of the American Mathematical Society 1 Annales de l’Institut Henri Poincaré. Probabilités et Statistiques 1 Computational Statistics and Data Analysis 1 Philosophical Transactions of the Royal Society of London. Series A. 
Mathematical, Physical and Engineering Sciences 1 Extremes 1 Probability in the Engineering and Informational Sciences 1 Mathematical Geology 1 Journal of Machine Learning Research (JMLR) 1 Statistical Applications in Genetics and Molecular Biology 1 Comptes Rendus Hebdomadaires des Séances de l’Académie des Sciences, Série A 1 Johns Hopkins Series in the Mathematical Sciences 1 Lecture Notes in Mathematics 1 Electronic Journal of Statistics 1 Sankhyā. Series A all top 5 #### Fields 160 Statistics (62-XX) 33 Probability theory and stochastic processes (60-XX) 13 Numerical analysis (65-XX) 8 Biology and other natural sciences (92-XX) 6 History and biography (01-XX) 4 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 2 General and overarching topics; collections (00-XX) 2 Combinatorics (05-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Approximations and expansions (41-XX) 2 Geophysics (86-XX) 2 Operations research, mathematical programming (90-XX) 1 Real functions (26-XX) 1 Operator theory (47-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Computer science (68-XX) 1 Astronomy and astrophysics (85-XX) 1 Systems theory; control (93-XX) 1 Information and communication theory, circuits (94-XX) #### Citations contained in zbMATH Open 153 Publications have been cited 6,023 times in 4,908 Documents Cited by Year Simultaneous analysis of Lasso and Dantzig selector. Zbl 1173.62022 Bickel, Peter J.; Ritov, Ya’acov; Tsybakov, Alexandre B. 2009 Some asymptotic theory for the bootstrap. Zbl 0449.62034 Bickel, Peter J.; Freedman, David 1981 Efficient and adaptive estimation for semiparametric models. Zbl 0786.62001 Bickel, Peter J.; Klaassen, Chris A. J.; Ritov, Ya’acov; Wellner, Jon A. 1993 On some global measures of the deviations of density function estimates. Zbl 0275.62033 Bickel, P. J.; Rosenblatt, M. 1973 Regularized estimation of large covariance matrices. 
Zbl 1132.62040 Bickel, Peter J.; Levina, Elizaveta 2008 Covariance regularization by thresholding. Zbl 1196.62062 Bickel, Peter J.; Levina, Elizaveta 2008 Mathematical statistics. Basic ideas and selected topics. Zbl 0403.62001 Bickel, Peter J.; Doksum, Kjell A. 1977 Convergence criteria for multiparameter stochastic processes and some applications. Zbl 0265.60011 Bickel, P. J.; Wichura, M. J. 1971 Robust estimates of location. Survey and advances. Zbl 0254.62001 Andrews, D. F.; Bickel, P. J.; Hampel, F. R.; Huber, P. J.; Rogers, W. H.; Tukey, J. W. 1972 On adaptive estimation. Zbl 0489.62033 Bickel, P. J. 1982 Sparse permutation invariant covariance estimation. Zbl 1320.62135 Rothman, Adam J.; Bickel, Peter J.; Levina, Elizaveta; Zhu, Ji 2008 Some theory for Fisher’s linear discriminant function, ‘naive Bayes’, and some alternatives when there are many more variables than observations. Zbl 1064.62073 Bickel, Peter J.; Levina, Elizaveta 2004 A nonparametric view of network models and Newman-Girvan and other modularities. Zbl 1359.62411 Bickel, Peter J.; Chen, Aiyou 2009 Efficient and adaptive estimation for semiparametric models. Zbl 0894.62005 Bickel, Peter J.; Klaassen, Chris A. J.; Ritov, Ya’acov; Wellner, Jon A. 1998 An analysis of transformations revisited. Zbl 0464.62058 Bickel, Peter J.; Doksum, Kjell A. 1981 Resampling fewer than $$n$$ observations: gains, losses, and remedies for losses. Zbl 0927.62043 Bickel, P. J.; Götze, F.; van Zwet, W. R. 1997 One-step Huber estimates in the linear model. Zbl 0322.62038 Bickel, P. J. 1975 Descriptive statistics for non-parametric models. III: Dispersion. Zbl 0351.62031 Bickel, P. J.; Lehmann, E. L. 1976 Asymptotic normality of the maximum-likelihood estimator for general hidden Markov models. Zbl 0932.62097 Bickel, Peter J.; Ritov, Ya’acov; Rydén, Tobias 1998 Estimating integrated squared density derivatives: Sharp best order of convergence estimates. Zbl 0676.62037 Bickel, P. J.; Ritov, Y. 
1988 Descriptive statistics for nonparametric models IV. Spread. Zbl 0415.62015 Bickel, P. J.; Lehmann, E. L. 1979 Asymptotically optimal Bayes and minimax procedures in sequential estimation. Zbl 0187.16303 Bickel, P. J.; Yahav, J. A. 1968 A decomposition for the likelihood ratio statistic and the Bartlett correction - a Bayesian argument. Zbl 0727.62035 Bickel, Peter J.; Ghosh, J. K. 1990 Descriptive statistics for nonparametric models. II: Location. Zbl 0321.62055 Bickel, P. J.; Lehmann, E. L. 1975 Pseudo-likelihood methods for community detection in large sparse networks. Zbl 1277.62166 Amini, Arash A.; Chen, Aiyou; Bickel, Peter J.; Levina, Elizaveta 2013 Minimax estimation of the mean of a normal distribution when the parameter space is restricted. Zbl 0484.62013 Bickel, P. J. 1981 The Edgeworth expansion for U-statistics of degree two. Zbl 0614.62015 Bickel, P. J.; Götze, F.; van Zwet, W. R. 1986 Goodness-of-fit tests for kernel regression with an application to option implied volatilities. Zbl 1004.62042 Aït-Sahalia, Yacine; Bickel, Peter J.; Stoker, Thomas M. 2001 Tests for monotone failure rate based on normalized spacings. Zbl 0191.50504 Bickel, P. J.; Doksum, K. A. 1969 Curse-of-dimensionality revisited: collapse of the particle filter in very large scale systems. Zbl 1166.93376 Bengtsson, Thomas; Bickel, Peter; Li, Bo 2008 Edgeworth expansions in nonparametric statistics. Zbl 0284.62018 Bickel, P. J. 1974 On some robust estimates of location. Zbl 0192.25802 Bickel, P. J. 1965 Asymptotic expansions for the power of distribution free tests in the one-sample problem. Zbl 0321.62049 Albers, Willem; Bickel, P. J.; van Zwet, W. R. 1976 Sums of functions of nearest neighbor distances, moment bounds, limit theorems and a goodness of fit test. Zbl 0502.62045 Bickel, Peter J.; Breiman, Leo 1983 Using residuals robustly I: Tests for heteroscedasticity, nonlinearity. Zbl 0385.62029 Bickel, P. J. 
1978 Asymptotic normality of maximum likelihood and its variational approximation for stochastic blockmodels. Zbl 1292.62042 Bickel, Peter; Choi, David; Chang, Xiangyu; Zhang, Hai 2013 The method of moments and degree distributions for network models. Zbl 1232.91577 Bickel, Peter J.; Chen, Aiyou; Levina, Elizaveta 2011 A distribution free version of the Smirnov two sample test in the p- variate case. Zbl 0179.48704 Bickel, P. J. 1969 On the choice of $$m$$ in the $$m$$ out of $$n$$ bootstrap and confidence bounds for extrema. Zbl 05361940 Bickel, Peter J.; Sakov, Anat 2008 The semiparametric Bernstein-von Mises theorem. Zbl 1246.62081 Bickel, P. J.; Kleijn, B. J. K. 2012 Some contributions to the asymptotic theory of Bayes solutions. Zbl 0167.17706 Bickel, P. J.; Yahav, J. A. 1969 Nonparametric estimators which can be “plugged-in”. Zbl 1058.62031 Bickel, Peter J.; Ritov, Ya’acov 2003 Inference for semiparametric models: Some questions and an answer. (With comments). Zbl 0997.62028 Bickel, Peter J.; Kwon, Jaimyoung 2001 Asymptotic normality and the bootstrap in stratified sampling. Zbl 0542.62009 Bickel, P. J.; Freedman, D. A. 1984 Achieving information bounds in non and semiparametric models. Zbl 0722.62025 Ritov, Y.; Bickel, P. J. 1990 Descriptive statistics for nonparametric models. I: Introduction. Zbl 0321.62054 Bickel, P. J.; Lehmann, E. L. 1975 Asymptotic expansions for the power of distributionfree tests in the two- sample problem. Zbl 0378.62047 Bickel, P. J.; van Zwet, W. R. 1978 Tailor-made tests for goodness of fit to semiparametric hypotheses. Zbl 1092.62050 Bickel, Peter J.; Ritov, Ya’acov; Stoker, Thomas M. 2006 Inference in hidden Markov models. I: Local asymptotic normality in the stationary case. Zbl 1066.62535 Bickel, Peter J.; Ritov, Ya’acov 1996 Efficient estimation in the errors in variables model. Zbl 0643.62029 Bickel, P. J.; Ritov, Y. 1987 Mathematical statistics. Basic ideas and selected topics. Volume I. 2nd ed. 
Zbl 1380.62002 Bickel, Peter J.; Doksum, Kjell A. 2015 A new mixing notion and functional central limit theorems for a sieve bootstrap in time series. Zbl 0954.62102 Bickel, Peter J.; Bühlmann, Peter 1999 Asymptotically pointwise optimal procedures in sequential analysis. Zbl 0214.45105 Bickel, P. J.; Yahav, J. A. 1967 On robust regression with high-dimensional predictors. Zbl 1359.62184 El Karoui, Noureddine; Bean, Derek; Bickel, Peter J.; Lim, Chinghway; Yu, Bin 2013 Robustness of design against autocorrelation in time I: Asymptotic theory, optimality for location and linear regression. Zbl 0403.62051 Bickel, P. J.; Herzberg, Agnes M. 1979 Some contributions to the theory of order statistics. Zbl 0214.46602 Bickel, P. J. 1967 On some analogues to linear combinations of order statistics in the linear model. Zbl 0265.62021 Bickel, P. J. 1973 Regularization in statistics. Zbl 1110.62051 Bickel, Peter J.; Li, Bo 2006 Efficient independent component analysis. Zbl 1114.62033 Chen, Aiyou; Bickel, Peter J. 2006 On some asymptotically nonparametric competitors of Hotelling’s $$T^ 2$$. Zbl 0138.13205 Bickel, P. J. 1965 Efficient estimation of linear functionals of a probability measure $$P$$ with known marginal distributions. Zbl 0742.62034 Bickel, Peter J.; Ritov, Ya’acov; Wellner, Jon A. 1991 On a semiparametric survival model with flexible covariate effect. Zbl 0953.62107 Nielsen, Jens P.; Linton, Oliver; Bickel, Peter J. 1998 Estimation in semiparametric models. Zbl 0795.62027 Bickel, P. J. 1993 Tests for monotone failure rate. II. Zbl 0191.50601 Bickel, P. J. 1969 Bootstrapping regression models with many parameters. Zbl 0529.62057 Bickel, P. J.; Freedman, D. A. 1983 On efficiency of first and second order. Zbl 0469.62038 Bickel, P. J.; Chibisov, D. M.; van Zwet, W. R. 1981 Parametric robustness: Small biases can be worthwhile. Zbl 0545.62028 Bickel, P. J. 1984 Texture synthesis and nonparametric resampling of random fields. 
Zbl 1246.62194 Levina, Elizaveta; Bickel, Peter J. 2006 Some problems on the estimation of unimodal densities. Zbl 0840.62038 Bickel, Peter J.; Fan, Jianquin 1996 Hypothesis testing for automated community detection in networks. Zbl 1411.62162 Bickel, Peter J.; Sarkar, Purnamrita 2016 Robust regression based on infinitesimal neighbourhoods. Zbl 0567.62051 Bickel, P. J. 1984 Consistent independent component analysis and prewhitening. Zbl 1373.62292 Chen, Aiyou; Bickel, Peter J. 2005 Role of normalization in spectral clustering for stochastic blockmodels. Zbl 1320.62150 Sarkar, Purnamrita; Bickel, Peter J. 2015 On some alternative estimates for shift in the P-variate on sample problem. Zbl 0214.45804 Bickel, P. J. 1964 Robustness of design against autocorrelation in time. II: Optimality, theoretical and numerical results for the first-order autoregressive process. Zbl 0505.62063 Bickel, P. J.; Herzberg, Agnes M.; Schilling, M. F. 1981 Renewal theory in the plane. Zbl 0138.40702 Bickel, P. J.; Yahav, J. A. 1965 Regression on manifolds: estimation of the exterior derivative. Zbl 1209.62063 Aswani, Anil; Bickel, Peter; Tomlin, Claire 2011 Another look at robustness: A review of reviews and some new developments. Zbl 0343.62039 Bickel, Peter J. 1976 The bootstrap in hypothesis testing. Zbl 1380.62183 Bickel, Peter J.; Ren, Jian-Jian 2001 An Edgeworth expansion for the $$m$$ out of $$n$$ bootstrapped median. Zbl 0969.62014 Sakov, Anat; Bickel, Peter J. 2000 Likelihood-based model selection for stochastic block models. Zbl 1371.62017 Wang, Y. X. Rachel; Bickel, Peter J. 2017 Two-dimensional random fields. Zbl 0297.60020 Bickel, P.; Rosenblatt, M. 1973 Unbiased estimation in convex families. Zbl 0197.44602 Bickel, P. J.; Lehmann, E. L. 1969 Banded regularization of autocovariance matrices in application to parameter estimation and forecasting of time series. Zbl 1228.62106 Bickel, Peter J.; Gel, Yulia R. 
2011 Minimax estimation of the mean of a normal distribution subject to doing well at a point. Zbl 0545.62029 Bickel, P. J. 1983 Confidence bands for a distribution function using the bootstrap. Zbl 0695.62126 Bickel, P. J.; Krieger, A. M. 1989 Variable selection in nonparametric regression with categorical covariates. Zbl 0763.62019 Bickel, Peter; Zhang, Ping 1992 On an A.P.O. rule in sequential estimation with quadratic loss. Zbl 0175.17004 Bickel, Peter J.; Yahav, Joseph A. 1969 Some theory for generalized boosting algorithms. Zbl 1222.68148 Bickel, Peter J.; Ritov, Ya’acov; Zakai, Alon 2006 Richardson extrapolation and the bootstrap. Zbl 0664.62014 Bickel, Peter J.; Yahav, Joseph A. 1988 Quelques aspects de la statistique robuste. Zbl 0484.62053 Bickel, P. J. 1981 Subsampling bootstrap of count features of networks. Zbl 1326.62067 Bhattacharyya, Sharmodeep; Bickel, Peter J. 2015 Approximating the inverse of banded matrices by banded matrices with applications to probability and statistics. Zbl 1241.60017 Bickel, Peter John; Lindner, Marko 2012 Nonparametric inference under biased sampling from a finite population. Zbl 0767.62032 Bickel, Peter J.; Nair, Vijayan N.; Wang, Paul C. C. 1992 Inference and auditing: The Stringer bound. Zbl 0755.62039 Bickel, Peter J. 1992 Uniform convergence of probability measures on classes of functions. Zbl 0821.60002 Bickel, P. J.; Millar, P. W. 1992 The $$m$$ out of $$n$$ bootstrap and goodness of fit tests with double censored data. Zbl 0839.62054 Bickel, Peter J.; Ren, Jian-Jian 1996 Empirical Bayes estimation in functional and structural models, and uniformly adaptive estimation of location. Zbl 0609.62044 Bickel, P. J.; Klaassen, C. A. J. 1986 Inferring gene-gene interactions and functional modules using sparse canonical correlation analysis. Zbl 1454.62416 Wang, Y. X. Rachel; Jiang, Keni; Feldman, Lewis J.; Bickel, Peter J.; Huang, Haiyan 2015 Local asymptotic normality of ranks and covariates in transformation models. 
Zbl 0897.62017 Bickel, P. J.; Ritov, Y. 1997 Network modelling of topological domains using Hi-C data. Zbl 1433.62318 Wang, Y. X. Rachel; Sarkar, Purnamrita; Ursu, Oana; Kundaje, Anshul; Bickel, Peter J. 2019 Asymptotics for high dimensional regression $$M$$-estimates: fixed design results. Zbl 1406.62084 Lei, Lihua; Bickel, Peter J.; El Karoui, Noureddine 2018 Projection pursuit in high dimensions. Zbl 1416.62320 Bickel, Peter J.; Kur, Gil; Nadler, Boaz 2018 Likelihood-based model selection for stochastic block models. Zbl 1371.62017 Wang, Y. X. Rachel; Bickel, Peter J. 2017 Hypothesis testing for automated community detection in networks. Zbl 1411.62162 Bickel, Peter J.; Sarkar, Purnamrita 2016 Mathematical statistics. Basic ideas and selected topics. Volume II. 2nd edition. Zbl 1397.62003 Bickel, Peter J.; Doksum, Kjell A. 2016 Spectral clustering and block models: a review and a new algorithm. Zbl 1381.62152 Bhattacharyya, Sharmodeep; Bickel, Peter J. 2016 Mathematical statistics. Basic ideas and selected topics. Volume I. 2nd ed. Zbl 1380.62002 Bickel, Peter J.; Doksum, Kjell A. 2015 Role of normalization in spectral clustering for stochastic blockmodels. Zbl 1320.62150 Sarkar, Purnamrita; Bickel, Peter J. 2015 Subsampling bootstrap of count features of networks. Zbl 1326.62067 Bhattacharyya, Sharmodeep; Bickel, Peter J. 2015 Inferring gene-gene interactions and functional modules using sparse canonical correlation analysis. Zbl 1454.62416 Wang, Y. X. Rachel; Jiang, Keni; Feldman, Lewis J.; Bickel, Peter J.; Huang, Haiyan 2015 Correction to the proof of consistency of community detection. Zbl 1310.62109 Bickel, Peter J.; Chen, Aiyou; Zhao, Yunpeng; Levina, Elizaveta; Zhu, Ji 2015 The Bayesian analysis of complex, high-dimensional models: can it be CODA? Zbl 1331.62162 Ritov, Y.; Bickel, P. J.; Gamst, A. C.; Kleijn, B. J. K. 2014 Pseudo-likelihood methods for community detection in large sparse networks. 
Zbl 1277.62166 Amini, Arash A.; Chen, Aiyou; Bickel, Peter J.; Levina, Elizaveta 2013 Asymptotic normality of maximum likelihood and its variational approximation for stochastic blockmodels. Zbl 1292.62042 Bickel, Peter; Choi, David; Chang, Xiangyu; Zhang, Hai 2013 On robust regression with high-dimensional predictors. Zbl 1359.62184 El Karoui, Noureddine; Bean, Derek; Bickel, Peter J.; Lim, Chinghway; Yu, Bin 2013 On convergence of recursive Monte Carlo filters in non-compact state spaces. Zbl 1259.62086 Lei, Jing; Bickel, Peter 2013 The semiparametric Bernstein-von Mises theorem. Zbl 1246.62081 Bickel, P. J.; Kleijn, B. J. K. 2012 Approximating the inverse of banded matrices by banded matrices with applications to probability and statistics. Zbl 1241.60017 Bickel, Peter John; Lindner, Marko 2012 Resampling fewer than $$n$$ observations: gains, losses, and remedies for losses. Zbl 1373.62173 Bickel, P. J.; Götze, F.; van Zwet, W. R. 2012 A model for sequential evolution of ligands by exponential enrichment (SELEX) data. Zbl 1254.92025 Atherton, Juli; Boley, Nathan; Brown, Ben; Ogawa, Nobuo; Davidson, Stuart M.; Eisen, Michael B.; Biggin, Mark D.; Bickel, Peter 2012 The method of moments and degree distributions for network models. Zbl 1232.91577 Bickel, Peter J.; Chen, Aiyou; Levina, Elizaveta 2011 Regression on manifolds: estimation of the exterior derivative. Zbl 1209.62063 Aswani, Anil; Bickel, Peter; Tomlin, Claire 2011 Banded regularization of autocovariance matrices in application to parameter estimation and forecasting of time series. Zbl 1228.62106 Bickel, Peter J.; Gel, Yulia R. 2011 Measuring reproducibility of high-throughput experiments. Zbl 1231.62124 Li, Qunhua; Brown, James B.; Huang, Haiyan; Bickel, Peter J. 2011 Subsampling methods for genomic inference. Zbl 1220.62130 Bickel, Peter J.; Boley, Nathan; Brown, James B.; Huang, Haiyan; Zhang, Nancy R. 2010 Simultaneous analysis of Lasso and Dantzig selector. 
Zbl 1173.62022 Bickel, Peter J.; Ritov, Ya’acov; Tsybakov, Alexandre B. 2009 A nonparametric view of network models and Newman-Girvan and other modularities. Zbl 1359.62411 Bickel, Peter J.; Chen, Aiyou 2009 An overview of recent developments in genomics and associated statistical methods. Zbl 1185.62184 Bickel, Peter J.; Brown, James B.; Huang, Haiyan; Li, Qunhua 2009 Efficient blind search: optimal power of detection under computational cost constraints. Zbl 1161.62087 Meinshausen, Nicolai; Bickel, Peter; Rice, John 2009 Discussion of: Brownian distance covariance. Zbl 1454.62171 Bickel, Peter J.; Xu, Ying 2009 Regularized estimation of large covariance matrices. Zbl 1132.62040 Bickel, Peter J.; Levina, Elizaveta 2008 Covariance regularization by thresholding. Zbl 1196.62062 Bickel, Peter J.; Levina, Elizaveta 2008 Sparse permutation invariant covariance estimation. Zbl 1320.62135 Rothman, Adam J.; Bickel, Peter J.; Levina, Elizaveta; Zhu, Ji 2008 Curse-of-dimensionality revisited: collapse of the particle filter in very large scale systems. Zbl 1166.93376 Bengtsson, Thomas; Bickel, Peter; Li, Bo 2008 On the choice of $$m$$ in the $$m$$ out of $$n$$ bootstrap and confidence bounds for extrema. Zbl 05361940 Bickel, Peter J.; Sakov, Anat 2008 Sparsity and the possibility of inference. Zbl 1192.62113 Bickel, Peter J.; Yan, Donghui 2008 Random matrix theory: A program of the statistics and applied mathematical sciences institute (SAMSI). Zbl 1154.15311 Bickel, Peter 2008 Tailor-made tests for goodness of fit to semiparametric hypotheses. Zbl 1092.62050 Bickel, Peter J.; Ritov, Ya’acov; Stoker, Thomas M. 2006 Regularization in statistics. Zbl 1110.62051 Bickel, Peter J.; Li, Bo 2006 Efficient independent component analysis. Zbl 1114.62033 Chen, Aiyou; Bickel, Peter J. 2006 Texture synthesis and nonparametric resampling of random fields. Zbl 1246.62194 Levina, Elizaveta; Bickel, Peter J. 2006 Some theory for generalized boosting algorithms. 
Zbl 1222.68148 Bickel, Peter J.; Ritov, Ya’acov; Zakai, Alon 2006 Consistent independent component analysis and prewhitening. Zbl 1373.62292 Chen, Aiyou; Bickel, Peter J. 2005 Nonparametric testing of an exclusion restriction. Zbl 1119.62039 Bickel, Peter J.; Ritov, Ya’akov; Stoker, Thomas M. 2005 Some theory for Fisher’s linear discriminant function, ‘naive Bayes’, and some alternatives when there are many more variables than observations. Zbl 1064.62073 Bickel, Peter J.; Levina, Elizaveta 2004 An approximate likelihood approach to nonlinear mixed effects models via spline approximation. Zbl 1429.62278 Ge, Zhiyu; Bickel, Peter J.; Rice, John A. 2004 Nonparametric estimators which can be “plugged-in”. Zbl 1058.62031 Bickel, Peter J.; Ritov, Ya’acov 2003 The limit distribution of a test statistic for bivariate normality. Zbl 1015.62062 Kim, Namhyun; Bickel, Peter J. 2003 Hidden Markov model likelihoods and their derivatives behave like i. i. d. ones. (La vraisemblance des chaînes de Markov cachées se comporte comme celle des variables i. i. d.). Zbl 1011.62087 Bickel, Peter J.; Ritov, Ya’acov; Rydén, Tobias 2002 Workshop on statistical approaches for the evaluation of complex computer models. Zbl 1032.62102 Berk, Richard A.; Bickel, Peter; Campbell, Katherine; Fovell, Robert; Keller-McNulty, Sallie; Kelly, Elizabeth; Linn, Rodman; Park, Byungkyu; Perelson, Alan; Rouphail, Nagui; Sacks, Jerome; Schoenberg, Frederic 2002 Extrapolation and the bootstrap. Zbl 1192.62125 Bickel, Peter J.; Sakov, Anat 2002 Goodness-of-fit tests for kernel regression with an application to option implied volatilities. Zbl 1004.62042 Aït-Sahalia, Yacine; Bickel, Peter J.; Stoker, Thomas M. 2001 Inference for semiparametric models: Some questions and an answer. (With comments). Zbl 0997.62028 Bickel, Peter J.; Kwon, Jaimyoung 2001 The bootstrap in hypothesis testing. 
Zbl 1380.62183 Bickel, Peter J.; Ren, Jian-Jian 2001 On maximizing item information and matching difficulty with ability. Zbl 1293.62235 Bickel, Peter; Buyske, Steven; Chang, Huahua; Ying, Zhiliang 2001 An Edgeworth expansion for the $$m$$ out of $$n$$ bootstrapped median. Zbl 0969.62014 Sakov, Anat; Bickel, Peter J. 2000 Non- and semiparametric statistics: compared and contrasted. Zbl 0970.62017 Bickel, P. J.; Ritov, Y. 2000 A new mixing notion and functional central limit theorems for a sieve bootstrap in time series. Zbl 0954.62102 Bickel, Peter J.; Bühlmann, Peter 1999 Efficient and adaptive estimation for semiparametric models. Zbl 0894.62005 Bickel, Peter J.; Klaassen, Chris A. J.; Ritov, Ya’acov; Wellner, Jon A. 1998 Asymptotic normality of the maximum-likelihood estimator for general hidden Markov models. Zbl 0932.62097 Bickel, Peter J.; Ritov, Ya’acov; Rydén, Tobias 1998 On a semiparametric survival model with flexible covariate effect. Zbl 0953.62107 Nielsen, Jens P.; Linton, Oliver; Bickel, Peter J. 1998 Resampling fewer than $$n$$ observations: gains, losses, and remedies for losses. Zbl 0927.62043 Bickel, P. J.; Götze, F.; van Zwet, W. R. 1997 Local asymptotic normality of ranks and covariates in transformation models. Zbl 0897.62017 Bickel, P. J.; Ritov, Y. 1997 Closure of linear processes. Zbl 0890.60033 Bickel, Peter J.; Bühlmann, Peter 1997 Singly and doubly censored current status data: Estimation, asymptotics and regression. Zbl 0929.62110 Van der Laan, Mark J.; Bickel, Peter J.; Jewell, Nicholas P. 1997 Inference in hidden Markov models. I: Local asymptotic normality in the stationary case. Zbl 1066.62535 Bickel, Peter J.; Ritov, Ya’acov 1996 Some problems on the estimation of unimodal densities. Zbl 0840.62038 Bickel, Peter J.; Fan, Jianquin 1996 The $$m$$ out of $$n$$ bootstrap and goodness of fit tests with double censored data. Zbl 0839.62054 Bickel, Peter J.; Ren, Jian-Jian 1996 What is a linear process? 
Zbl 0863.62074 Bickel, Peter J.; Bühlmann, Peter 1996 Efficient and adaptive estimation for semiparametric models. Zbl 0786.62001 Bickel, Peter J.; Klaassen, Chris A. J.; Ritov, Ya’acov; Wellner, Jon A. 1993 Estimation in semiparametric models. Zbl 0795.62027 Bickel, P. J. 1993 Efficient estimation using both direct and indirect observations. Zbl 0816.62032 Bickel, P. J.; Ritov, Y. 1993 Variable selection in nonparametric regression with categorical covariates. Zbl 0763.62019 Bickel, Peter; Zhang, Ping 1992 Nonparametric inference under biased sampling from a finite population. Zbl 0767.62032 Bickel, Peter J.; Nair, Vijayan N.; Wang, Paul C. C. 1992 Inference and auditing: The Stringer bound. Zbl 0755.62039 Bickel, Peter J. 1992 Uniform convergence of probability measures on classes of functions. Zbl 0821.60002 Bickel, P. J.; Millar, P. W. 1992 Theoretical comparison of different bootstrap $$t$$ confidence bounds. Zbl 0838.62033 Bickel, P. J. 1992 Efficient estimation of linear functionals of a probability measure $$P$$ with known marginal distributions. Zbl 0742.62034 Bickel, Peter J.; Ritov, Ya’acov; Wellner, Jon A. 1991 Large sample theory of estimation in biased sampling regression models. I. Zbl 0742.62036 Bickel, Peter J.; Ritov, J. 1991 A decomposition for the likelihood ratio statistic and the Bartlett correction - a Bayesian argument. Zbl 0727.62035 Bickel, Peter J.; Ghosh, J. K. 1990 Achieving information bounds in non and semiparametric models. Zbl 0722.62025 Ritov, Y.; Bickel, P. J. 1990 Hyperaccuracy of bootstrap based prediction. Zbl 0708.62038 Bai, Chongen; Bickel, Peter J.; Olshen, Richard A. 1990 Confidence bands for a distribution function using the bootstrap. Zbl 0695.62126 Bickel, P. J.; Krieger, A. M. 1989 Estimating integrated squared density derivatives: Sharp best order of convergence estimates. Zbl 0676.62037 Bickel, P. J.; Ritov, Y. 1988 Richardson extrapolation and the bootstrap. 
Zbl 0664.62014 Bickel, Peter J.; Yahav, Joseph A. 1988 Efficient estimation in the errors in variables model. Zbl 0643.62029 Bickel, P. J.; Ritov, Y. 1987 Efficient testing in a class of transformation models: An outline. Zbl 0729.62018 Bickel, P. J. 1987 The Edgeworth expansion for U-statistics of degree two. Zbl 0614.62015 Bickel, P. J.; Götze, F.; van Zwet, W. R. 1986 Empirical Bayes estimation in functional and structural models, and uniformly adaptive estimation of location. Zbl 0609.62044 Bickel, P. J.; Klaassen, C. A. J. 1986 A simple analysis of third-order efficiency of estimates. Zbl 1373.62093 Bickel, Peter J.; Götze, Friedrich; van Zwet, W. R. 1985 Asymptotic normality and the bootstrap in stratified sampling. Zbl 0542.62009 Bickel, P. J.; Freedman, D. A. 1984 Parametric robustness: Small biases can be worthwhile. Zbl 0545.62028 Bickel, P. J. 1984 Robust regression based on infinitesimal neighbourhoods. Zbl 0567.62051 Bickel, P. J. 1984 Sums of functions of nearest neighbor distances, moment bounds, limit theorems and a goodness of fit test. Zbl 0502.62045 Bickel, Peter J.; Breiman, Leo 1983 Bootstrapping regression models with many parameters. Zbl 0529.62057 Bickel, P. J.; Freedman, D. A. 1983 Minimax estimation of the mean of a normal distribution subject to doing well at a point. Zbl 0545.62029 Bickel, P. J. 1983 A Festschrift for Erich L. Lehmann. In honor of his sixty-fifth birthday. Zbl 0511.00027 Bickel, Peter J.; Doksum, Kjell A.; Hodges, J. L. jun. 1983 Minimizing Fisher information over mixtures of distributions. Zbl 0544.62021 Bickel, P. J.; Collins, J. R. 1983 On adaptive estimation. Zbl 0489.62033 Bickel, P. J. 
1982 ...and 53 more Documents all top 5 #### Cited by 5,589 Authors 45 Fan, Jianqing 35 Bickel, Peter John 32 Schick, Anton 29 Bühlmann, Peter 27 Van de Geer, Sara Anna 25 Hall, Peter Gavin 24 Cai, Tony Tony 23 Dette, Holger 23 Hallin, Marc 23 Wefelmeyer, Wolfgang 20 Lian, Heng 19 Van der Vaart, Adrianus Willem 18 Liu, Han 18 Park, Byeong Uk 18 Tsybakov, Alexandre B. 17 Bouzebda, Salim 17 Härdle, Wolfgang Karl 17 Hwang, Leng-Cheng 17 Linton, Oliver Bruce 17 Ma, Yanyuan 17 Van Keilegom, Ingrid 16 Levina, Elizaveta 16 Mukerjee, Rahul 16 Politis, Dimitris Nicolas 16 Priebe, Carey E. 14 Cheng, Guang 14 Horváth, Lajos 14 Klaassen, Chris A. J. 14 Lahiri, Soumendra Nath 14 Li, Runze 14 Matrán, Carlos 14 Van der Laan, Mark Johannes 14 Wainwright, Martin J. 14 Wu, Wei Biao 13 Huang, Jian 13 Lee, Sangyeol 13 Leng, Chenlei 13 Müller, Ursula U. 13 Nickl, Richard 13 Zhou, Harrison H. 13 Zhu, Ji 12 Ahmad, Ibrahim A. 12 Dalalyan, Arnak S. 12 del Barrio, Eustasio 12 Doukhan, Paul 12 Gel, Yulia R. 12 Ghosal, Subhashis 12 Ghosh, Malay 12 Giné-Masdéu, Evarist 12 Ritov, Ya’acov 12 Sordo, Miguel A. 12 Yang, Lijian 12 Zhang, Cun-Hui 12 Zhu, Lixing 11 Cheng, Fuxia 11 Cuesta-Albertos, Juan Antonio 11 Hu, Tao 11 Kolchinskiĭ, Vladimir I’ich 11 Lu, Xuewen 11 Mammen, Enno 11 Verzelen, Nicolas 11 Víšek, Jan Ámos 11 Wang, Yazhen 11 Wasserman, Larry Alan 11 Werker, Bas J. M. 11 Yuan, Ming 10 Bradic, Jelena 10 Carroll, Raymond James 10 Chernozhukov, Victor 10 El Karoui, Noureddine 10 Fan, Yingying 10 Gao, Chao 10 Ghosh, Jayanta Kumar 10 Han, Fang 10 Janssen, Paul 10 Jing, Bingyi 10 Jurečková, Jana 10 Kabaila, Paul V. 10 Kochar, Subhash C. 10 Kosorok, Michael R. 10 Koul, Hira Lal 10 Paindaveine, Davy 10 Paparoditis, Efstathios 10 Rousseeuw, Peter J. 
10 Sen, Pranab Kumar 10 Shao, Jun 10 Sun, Jianguo 10 Veraverbeke, Noël 10 Yu, Bin 10 Yuan, Ao 9 Arias-Castro, Ery 9 Bose, Arup 9 Chen, Xiaohong 9 Gijbels, Irène 9 Guillot, Dominique 9 Holzmann, Hajo 9 Kreiss, Jens-Peter 9 Li, Qi 9 Liang, Hua 9 Liao, Yuan ...and 5,489 more Authors all top 5 #### Cited in 341 Serials 525 The Annals of Statistics 271 Journal of Statistical Planning and Inference 263 Statistics & Probability Letters 258 Journal of Multivariate Analysis 237 Journal of Econometrics 198 Communications in Statistics. Theory and Methods 190 Computational Statistics and Data Analysis 187 Electronic Journal of Statistics 131 Bernoulli 112 Annals of the Institute of Statistical Mathematics 105 Journal of the American Statistical Association 94 Journal of Nonparametric Statistics 78 The Canadian Journal of Statistics 73 Statistics 64 Communications in Statistics. Simulation and Computation 64 Journal of Statistical Computation and Simulation 64 Econometric Theory 63 Statistical Science 61 Stochastic Processes and their Applications 58 Scandinavian Journal of Statistics 54 The Annals of Applied Statistics 53 Test 52 Metrika 52 Probability Theory and Related Fields 51 Journal of Machine Learning Research (JMLR) 48 Biometrics 45 Sequential Analysis 32 Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 31 Statistics and Computing 29 Journal of the Royal Statistical Society. Series B. Statistical Methodology 27 Insurance Mathematics & Economics 27 Mathematical Methods of Statistics 26 Journal of Applied Statistics 25 Kybernetika 24 Journal of the Korean Statistical Society 23 Computational Statistics 23 Statistical Papers 22 Journal of Time Series Analysis 22 Econometric Reviews 22 European Series in Applied and Industrial Mathematics (ESAIM): Probability and Statistics 19 Biometrical Journal 19 Statistica Neerlandica 19 Sankhyā. Series A 17 Psychometrika 17 Lifetime Data Analysis 17 Journal of Statistical Theory and Practice 17 Science China. 
Mathematics 16 Neural Computation 16 Economics Letters 16 The Annals of Applied Probability 16 Journal of Mathematical Sciences (New York) 15 The Annals of Probability 15 Journal of Theoretical Probability 14 Lithuanian Mathematical Journal 14 Statistica Sinica 14 Statistical Inference for Stochastic Processes 14 Bayesian Analysis 13 Machine Learning 13 Annales de l’Institut Henri Poincaré. Probabilités et Statistiques 12 Acta Mathematicae Applicatae Sinica. English Series 12 European Journal of Operational Research 11 Pattern Recognition 11 Statistical Methods and Applications 10 Journal of Soviet Mathematics 10 Mathematical Programming. Series A. Series B 10 Applied and Computational Harmonic Analysis 10 Australian & New Zealand Journal of Statistics 10 Statistical Methodology 10 Statistical Analysis and Data Mining 9 Applied Mathematics and Computation 9 Linear Algebra and its Applications 8 Journal of the Franklin Institute 8 Journal of Applied Probability 8 Journal of Computational and Applied Mathematics 8 American Journal of Mathematical and Management Sciences 8 The Econometrics Journal 8 Comptes Rendus. Mathématique. Académie des Sciences, Paris 8 Advances in Data Analysis and Classification. ADAC 7 Advances in Applied Probability 7 Journal of Computational Physics 7 Automatica 7 Science in China. Series A 7 Proceedings of the National Academy of Sciences of the United States of America 7 Extremes 7 Acta Mathematica Sinica. English Series 7 Methodology and Computing in Applied Probability 7 Quantitative Finance 7 Journal of Systems Science and Complexity 7 Statistical Modelling 7 ASTIN Bulletin 7 AStA. Advances in Statistical Analysis 7 Sankhyā. 
Series B 7 Random Matrices: Theory and Applications 6 Computers & Mathematics with Applications 6 Journal of Mathematical Analysis and Applications 6 Journal of Mathematical Psychology 6 Advances in Applied Mathematics 6 Stochastic Analysis and Applications 6 Physica D 6 Probability in the Engineering and Informational Sciences ...and 241 more Serials all top 5 #### Cited in 51 Fields 4,431 Statistics (62-XX) 875 Probability theory and stochastic processes (60-XX) 551 Numerical analysis (65-XX) 211 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 169 Computer science (68-XX) 147 Operations research, mathematical programming (90-XX) 115 Biology and other natural sciences (92-XX) 107 Combinatorics (05-XX) 76 Information and communication theory, circuits (94-XX) 57 Linear and multilinear algebra; matrix theory (15-XX) 47 Systems theory; control (93-XX) 38 Functional analysis (46-XX) 22 History and biography (01-XX) 16 Approximations and expansions (41-XX) 14 Geophysics (86-XX) 13 Harmonic analysis on Euclidean spaces (42-XX) 12 Dynamical systems and ergodic theory (37-XX) 12 Operator theory (47-XX) 12 Calculus of variations and optimal control; optimization (49-XX) 11 Measure and integration (28-XX) 11 Quantum theory (81-XX) 9 Statistical mechanics, structure of matter (82-XX) 8 Ordinary differential equations (34-XX) 8 Partial differential equations (35-XX) 7 Convex and discrete geometry (52-XX) 5 Special functions (33-XX) 5 Fluid mechanics (76-XX) 4 Number theory (11-XX) 4 Real functions (26-XX) 4 Abstract harmonic analysis (43-XX) 4 Integral transforms, operational calculus (44-XX) 4 Integral equations (45-XX) 4 Astronomy and astrophysics (85-XX) 3 Functions of a complex variable (30-XX) 3 Algebraic topology (55-XX) 3 Global analysis, analysis on manifolds (58-XX) 3 Mathematics education (97-XX) 2 Group theory and generalizations (20-XX) 2 Several complex variables and analytic spaces (32-XX) 2 Differential geometry (53-XX) 2 
General topology (54-XX) 1 General and overarching topics; collections (00-XX) 1 Order, lattices, ordered algebraic structures (06-XX) 1 Commutative algebra (13-XX) 1 Algebraic geometry (14-XX) 1 Topological groups, Lie groups (22-XX) 1 Potential theory (31-XX) 1 Sequences, series, summability (40-XX) 1 Manifolds and cell complexes (57-XX) 1 Mechanics of deformable solids (74-XX) 1 Optics, electromagnetic theory (78-XX) #### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
# The Honeycomb Conjecture @article{Hales2001TheHC, title={The Honeycomb Conjecture}, author={Thomas C. Hales}, journal={Discrete \& Computational Geometry}, year={2001}, volume={25}, pages={1-22} } • T. Hales • Published 8 June 1999 • Mathematics • Discrete & Computational Geometry This article gives a proof of the classical honeycomb conjecture: any partition of the plane into regions of equal area has perimeter at least that of the regular hexagonal honeycomb tiling. 363 Citations The Honeycomb Problem on the Sphere The honeycomb problem on the sphere asks for the perimeter-minimizing partition of the sphere into N equal areas. This article solves the problem when N=12. The unique minimizer is a tiling of 12 The Least-Perimeter Partition of a Sphere into Four Equal Areas We prove that the least-perimeter partition of the sphere into four regions of equal area is a tetrahedral partition. On Hamiltonian Properties of Honeycomb Meshes • Computer Science, Physics • 2019 In this paper, we investigated Hamiltonian properties of honeycomb meshes which are created in two different ways. We obtained different Hamilton paths for Honeycomb Meshes for any dimension with Certain hyperbolic regular polygonal tiles are isoperimetric • Mathematics Geometriae Dedicata • 2021 The hexagon is the least-perimeter tile in the Euclidean plane. On hyperbolic surfaces, the isoperimetric problem differs for every given area. Cox conjectured that a regular $k$-gonal tile with Least-Perimeter Partitions of the Sphere We consider generalizations of the honeycomb problem to the sphere S and seek the perimeter-minimizing partition into n regions of equal area. We provide a new proof of Masters’ result that three Perimeter-minimizing Tilings by Convex and Non-convex Pentagons • Mathematics • 2013 We study the presumably unnecessary convexity hypothesis in the theorem of Chung et al. [CFS] on perimeter-minimizing planar tilings by convex pentagons. 
We prove that the theorem holds without the Approximation of Partitions of Least Perimeter by Γ-Convergence: Around Kelvin’s Conjecture A numerical process to approximate optimal partitions in any dimension is reported to relax the problem into a functional framework based on the famous result of Γ-convergence obtained by Modica and Mortola. Planar clusters and perimeter bounds • Mathematics • 2005 We provide upper and lower bounds on the least-perimeter way to enclose and separate n regions of equal area in the plane (theorem 3.1). Along the way, inside the hexagonal honeycomb, we provide On Generalizing the Honeycomb Theorem to Compact Hyperbolic Manifolds and the Sphere • Mathematics • 2006 We provide a possible alternate proof to the Honeycomb Conjecture in the plane. We generalize the proof of the hexagonal isoperimetric inequality to S² and H² under certain conditions and deduce that a ## References Finite and Uniform Stability of Sphere Packings • Mathematics Discret. Comput. Geom. • 1998 It is shown that many of the usual best-known candidates, for the most dense packings with congruent spherical balls, have the property of being uniformly stable, i.e., for a sufficiently small ε > 0 every finite rearrangement of the balls of this packing, where no ball is moved more than ε, is the identity rearrangement. What are all the best sphere packings in low dimensions? • Mathematics Discret. Comput. Geom. • 1995 We describe what may be all the best packings of nonoverlapping equal spheres in dimensions n ≤ 10, where “best” means both having the highest density and not permitting any local improvement. For Soap bubbles in ${\bf R}^2$ and in surfaces. and in surfaces, i.e., the least-perimeter way to enclose and separate regions of prescribed area. They consist of constant-curvature arcs meeting in threes at 120 degrees.
If one prescribes the SOAP BUBBLES IN R² AND IN SURFACES We prove existence and regularity for "soap bubbles" in R² and in surfaces, i.e., the least-perimeter way to enclose and separate regions of prescribed area. They consist of constant-curvature arcs The Kelvin problem: foam structures of minimal surface area In 1887, Kelvin posed one of the most discussed scientific questions of the last 100 years - the problem of the division of three-dimensional space into cells of equal volume with minimal area. It Geometric Measure Theory: A Beginner's Guide Geometric Measure Theory: A Beginner's Guide, Fifth Edition provides the framework readers need to understand the structure of a crystal, a soap bubble cluster, or a universe. The book is essential Unsolved Problems In Geometry • Mathematics • 1991 A monograph on geometry, each section in the book describes a problem or a group of related problems, capable of generalization or variation in many directions. What the bees know and what they do not know A honeycomb is defined as a set of congruent convex polyhedra, called cells, filling the space between two parallel planes without overlapping and without interstices in such a way that each cell has a face on one of two planes but does not have faces on both planes. On the origin of the species These laws, taken in the largest sense, being Growth with reproduction; Inheritance which is almost implied by reproduction; Variability from the indirect and direct action of the conditions of life,
## Thursday, February 26, 2015

### Assyrian history destroyed

Many events are taking place every day and many events make me – and many of you – upset. But what made me extremely angry today was this ISIS video: The video that was embedded here violated YouTube's rules although I don't know what the exact rule is. Ask those who saw it on Thursday... To skip the babbling by the apparatchik-bigot and to get to the drastic "action", jump to 2:40. The animals have penetrated into Mosul, Northern Iraq, and they chose the local Nineveh Museum as their target. The museum contains lots of priceless (or at least multi-billion) statues from the neo-Assyrian empire. Well, it did contain them – up to yesterday. When the neo-Assyrian empire existed between 900 before Christ and 600 before Christ or so, it was the most powerful state in the world. The word "Assyria" is clearly related to "Syria" – in certain languages, they are the same words. But you shouldn't get confused about the ethnicity. When we talk about "Syrians" today, we mean Arabs. The Syrian Arab Republic is one of the 22 countries of the Arab world. On the other hand, the Assyrians who used to control the territory speak a dialect of Aramaic, the same language that was probably preferred by Jesus Christ. It's related to other Northwest Semitic languages, Canaanite (Hebrew+Phoenician) and two extinct ones (Amorite, Ugaritic). The Assyrians were also the first ones who began to expel the Jews from their homeland – centuries before the Roman Empire got there and a millennium before Prophet Mohammed was born. At any rate, the statues have existed for something between 3,000 and 2,500 years. Lots of rulers, regimes, empires, bureaucrats, and traders have nurtured them for almost 3 millennia. Now, on February 26th, 2015 after Christ, a bunch of horny, worthless thugs with sledgehammers and power drills gets to the museum and nothing is left. The destroyed statues include the Lamassu, too.
It's the protective deity of the Assyrians that boasts wings, a bull's body (sometimes replaced by a lion), and a male human head. Assyrians' abilities in genetic engineering were pretty good. (Now, 3,000 years later, Britain is allowing kids created out of DNA of 3 parents again.) There exist various copies of the Lamassu across the world – in the Louvre, the British Museum, and elsewhere – but the Lamassu of Mosul was the ultimate original one, about 2,900 years old. I hope that someone has at least a good copy or sufficiently accurate pictures that allow the statue to be reconstructed. Someone should definitely do these things – create many copies of all the destroyed statues. I feel terrible that we – your humble correspondent, you, and others – just couldn't prevent this event from occurring. In some future textbooks, it may be mentioned that the Assyrian antiquities existed up to February 2015 when they were simply destroyed by some horny animals. The thugs' explanation of why they "had to" destroy the Assyrian antiquities was that the statues represented "idolatry". Even if this were the case, how could it be a justification for this unlimited barbarism? No previous ruler – and that includes rulers who have considered themselves Muslims – has done anything similar to the sculptures. And how does this "idolatry" differ from the ISIS thugs' and others' worshipping of the piggy 7th century pedophile who "ordered them" to perform this bestial act in the museum? The value of the statues exceeds the market value of all the ISIS members combined (which is basically just a pile of pork) by so many orders of magnitude that a single destroyed statue in the museum is a sufficient justification for the execution of every member and sympathizer of the ISIS in the whole world. If we fail to eradicate this scum, our culture may soon follow the Assyrian example. It's not just the statues. The ISIS focused on the Assyrian people as well.
Hundreds of kidnapped Assyrian Christians are probably going to be murdered soon – or it has already taken place. Death to the radical Islam. Death to the mindless worshipping of Allah, Mohammed, and similar virtual scum. Off-topic but political: CNN is on a roll. Just weeks after describing the Ukrainian army as pro-U.S. troops and days after placing the Russian flag over Ukraine, they identified the Jihadi John as... Vladimir Putin! ;-) This must be some new strategy of CNN to increase their visibility – it can't be possible that these blunders are due to the low quality of the work by the people responsible for the captions and editing.

#### snail feedback (34):

Great but those early Christians who may have behaved like animals have already been removed from the face of the Earth, so the same thing should be done with the Muslims who behave like animals. Lubos, I share your horror at this wanton destruction. There have been other attempts to destroy a culture which differed from that of the destroyer. I'm thinking in particular of the British bombing campaign against German cities in 1945, ordered by Churchill and carried out by Harris, with little or no military or strategic purpose. The war was militarily almost over and the prime aim was to smash German culture so that it would never recover. Churchill and his people really hated Germany and everything about it in much the same way that these ISIS crazies hate the idolatry of other cultures than their own. The recent 'Germany' exhibition at the British Museum was a step in the direction of tolerance and understanding. But intolerance is sadly never far below the surface. It's in our genes or DNA or whatever the buzzword is. This is horrible and so sad to see these wonders destroyed. What I still don't understand is that the UK and the US are still arming Daesh... It looks like France has stopped arming them but we can still see FAMAS guns (French assault rifles) among them.
Why, why, why does the West keep helping these thugs? Will Putin be our only savior? Would like to see some documentation on that, thanks. I notice that in Saudi Arabia they even destroy early Islamic sites: http://en.wikipedia.org/wiki/Destruction_of_early_Islamic_heritage_sites_in_Saudi_Arabia To these fundamentalists it seems nothing is sacred. It's almost as if Muhammad, a warlord and looter, hijacked God. Unfortunately you cannot ask -- or, rather, expect -- people to give up the idea of God, no matter how they came by it. I have a suggestion though, for Muslims who would like to escape this horrible cult. They should revert to the idea of God that existed in Arabia even before Mohammad, a monotheistic belief which, if I am not mistaken, not even Mohammad condemned. It was called hanifya and is usually translated as "the original, pure religion of Abraham." As to what that was, historically, I touch on it in passing in an essay I once wrote on the subject of the Torah and the West Bank: Understanding, not belief, is what we need more of today. "No previous ruler – and that includes rulers who have considered themselves Muslims – has done anything similar to the sculptures." Perhaps it's because ISIS are not just Muslims, but radical Communitarians as well. From: http://www.theatlantic.com/features/archive/2015/02/what-isis-really-wants/384980/ "Choudary said Sharia has been misunderstood because of its incomplete application by regimes such as Saudi Arabia, which does behead murderers and cut off thieves’ hands. “The problem,” he explained, “is that when places like Saudi Arabia just implement the penal code, and don’t provide the social and economic justice of the Sharia—the whole package—they simply engender hatred toward the Sharia.” That whole package, he said, would include free housing, food, and clothing for all, though of course anyone who wished to enrich himself with work could do so. Abdul Muhid, 32, continued along these lines.
He was dressed in mujahideen chic when I met him at a local restaurant: scruffy beard, Afghan cap, and a wallet outside of his clothes, attached with what looked like a shoulder holster. When we sat down, he was eager to discuss welfare. The Islamic State may have medieval-style punishments for moral crimes (lashes for boozing or fornication, stoning for adultery), but its social-welfare program is, at least in some aspects, progressive to a degree that would please an MSNBC pundit. Health care, he said, is free. (“Isn’t it free in Britain, too?,” I asked. “Not really,” he said. “Some procedures aren’t covered, such as vision.”) This provision of social welfare was not, he said, a policy choice of the Islamic State, but a policy obligation inherent in God’s law." Chilling stuff. http://en.wikipedia.org/wiki/Bombing_of_Dresden_in_World_War_II It sickens me, Lubos. These treasures are lost forever now. Remember when the Taliban blew up the giant Buddha statues some years ago? Whatever stupidity CNN has reported does not surprise me; they are about on par with the National Enquirer at this point in terms of content and analysis. Yup, the Buddhas of Bamiyan. This is definitely a tragedy. But I thought they were the "jayvee team". What relevance is that? It is like Obama discussing the crusades and advising getting off the high horse. All that is interesting in an academic discussion but ISIS exists in the real world here and now. So your point is irrelevant. There were allied bombings of Germany in 1945. http://en.wikipedia.org/wiki/Bombing_of_Dresden_in_World_War_II Generally I am not a fan of open-ended aerial bombardment, but WW2 Germany is not worthy of much defense. They got what they asked for.
An archeologist told me that a conscious practice in modern archeology is to leave portions of sites intentionally un-excavated, so that future archeologists with even more refined instruments of handling such artifacts will be able to apply new techniques to the sites, in the hopes of learning new things not possible currently. It's a humble and wise attitude, at the other end of the spectrum from these evil ISIS/Taliban pigs who prefer total annihilation of everything not dreamt of in their tiny, crippled philosophy. Thanks, but based on the article it is not at all clear that the bombing was not a legitimate part of the war effort. Well I would tend to agree with you. The bombings of German cities were politically necessary just as was our destruction of Hiroshima and Nagasaki. Truman really had no choice and neither did Churchill. German culture had little to do with it. Saving as many British lives as possible had a lot to do with it. I may be the only respondent here who actually remembers the war and the public attitude at the time. If the British had possessed thermonuclear weapons they would have used them on Germany and vice-versa. Legitimate? That’s not a word that would have been used during WWII by either side. The photo of the Lamassu you show is the original at the University of Chicago Oriental Institute, excavated at Khorsabad from the courtyard of the palace of the Assyrian King Sargon II in 1929. ISIS destroyed another similar winged-bull statue, but it was not 'the Lamassu'. ISIS could have made a better deal if they had asked for ransom and handed over the material rather than destroying it. At least they could have scored an additional point, with the Western world itself being indifferent to the historic heritage of ancient civilisations. Do you really believe that, Gene? Japan was ready to surrender. A demonstration would have served. Have you read Richard Rhodes' "The Making of the Atomic Bomb" and "Dark Sun"?
Also, the fire-bombings of Dresden and Hamburg were horrible. All those people did not deserve to die. Dear Coldish, have you ever read Churchill's 1937 letter "Friendship with Germany"? http://www.scottmanning.com/archives/friendshipwithgermany.php It paints a different picture than you suggest, and so does Churchill and the Germans here http://www.tandfonline.com/doi/pdf/10.1080/13619462.2011.546132 I doubt that the bombing was ever unnecessary. But even if it were focusing on destroying the culture, the culture going back 1,000 years... was an integral part of the German expansionism that was torturing almost the whole continent up to "very recently" at that time. It would make sense to undermine this source of excessive self-confidence. You can't compare it with the destruction of cultures that have been gone for thousands of years. Dear Gordon, the fire Holocaust in Dresden was terrible, like the instant death in Hiroshima and Nagasaki. But I tend to think that wars don't end when the battles still look "more or less" balanced. Wars end when some overwhelming strength of one side is shown. Germany had to be shown its clear military inferiority which could match its superiority from the beginning of the war, and the Japanese - with their more fanatical desire to sacrifice their last life - had to be shown more than that. Being a Muslim and a radical community organizer at the same moment may be a deadly combo. ;-) Time to launch Operation Lamassu! Such retarded braindead barbarians have no right of existence in today's world. Their destructive behavior is not acceptable by any standard. I missed this from the news. Thanks for reporting. They destroyed thousands-of-years-old artifacts to build a new monument to human stupidity. And here's 'justification' for it: http://islamqa.info/en/20894 I had exactly the same reaction.
Those art pieces have survived 3,000 years and God knows that during the period there were all kinds of crazy, bloodthirsty and otherwise radical leaders, conquerors, kings and emperors. Yet none of them was crazy enough to destroy this art. After something like 1,000 generations, we now have one that did it. How can one call these things? Not even animals – animals wouldn't do that. Perhaps virus is the fitting name. And I also agree that these things should be totally eradicated from the face of the Earth until the last one. Like in the Game of Thrones – set up a cross every 10 meters from Mosul to Aleppo and hang one of those things on every cross until the birds, sun and wind do their work. Luboš, I think that you've been a bit too generous with Islam in this post. As the screaming bearded pig explains at the beginning of the video (in the translation from Arabic that appeared in the media), they're just doing what their god commanded through their Prophet the Pederast. So they've been at it for some 13 centuries now. In fact, some of the figures that they hammer down show clear evidence of previous mutilations. Topical. Did you see Monuments Men? http://www.imdb.com/title/tt2177771/ It shows Hitler stealing priceless works of art and historic artifacts and warehousing them in abandoned mine shafts and the like. Not clear what would have happened with these treasures if the Germans had won the war, but one thing is certain: a good portion of the artifacts were preserved in this way from bombing. Especially the Royal House of whatever in Dresden. The crowns and jeweled dirks, the ornamental suits of armor. Swords with rubies and golden hilts. I guess it was customary for each King and Queen to have their own jewelry worked up, because they had 25 or 30 different sets of armor spanning hundreds of years. A lot of that stuff, if not all of it, was saved. I saw these treasures with my own eyes in a traveling display on loan to a San Francisco museum.
[25] After his 1679–1680 correspondence with Hooke, Newton adopted the language of inward or centripetal force. According to Newton scholar J. Bruce Brackenridge, although much has been made of the change in language and difference of point of view, as between centrifugal or centripetal forces, the actual computations and proofs remained the same either way. The law can also be written as F = G(m₁m₂)/r², where G is the universal gravitational constant, F is the force of gravitation between the two bodies, m₁ and m₂ are their masses, and r is the distance between their centers. The original statements by Clairaut (in French) are found (with orthography here as in the original) in "Explication abregée du systême du monde, et explication des principaux phénomenes astronomiques tirée des Principes de M. Newton" (1759), at Introduction (section IX), page 6: "Il ne faut pas croire que cette idée ... de Hook diminue la gloire de M. Newton" ("One must not believe that this idea ... of Hooke diminishes Newton's glory"), and "L'exemple de Hook" [serves] "à faire voir quelle distance il y a entre une vérité entrevue & une vérité démontrée" ("Hooke's example serves to show what a distance there is between a glimpsed truth and a demonstrated truth"). As per Gauss's law, the field of a spherically symmetric body can be found from the flux equation ∮∂V g · dA = −4πGM, where M is the mass enclosed by the closed surface ∂V. Newton's law of universal gravitation states that every particle attracts every other particle in the universe with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centers. He did not claim to think it up as a bare idea.
[18] Hooke's correspondence with Newton during 1679–1680 not only mentioned this inverse square supposition for the decline of attraction with increasing distance, but also, in Hooke's opening letter to Newton, of 24 November 1679, an approach of "compounding the celestial motions of the planets of a direct motion by the tangent & an attractive motion towards the central body". The universal law of gravitation states that every object in the universe attracts every other object with a force called the gravitational force. [13] It was later on, in writing on 6 January 1679/80[16] to Newton, that Hooke communicated his "supposition ... that the Attraction always is in a duplicate proportion to the Distance from the Center Reciprocall, and Consequently that the Velocity will be in a subduplicate proportion to the Attraction and Consequently as Kepler Supposes Reciprocall to the Distance." (pp. 431–448, see particularly page 431.) Newton's place in the Gravity Hall of Fame is not due to his discovery of gravity, but rather due to his discovery that gravitation is universal. For two objects of masses m₁ and m₂ at a distance r from each other, the force F of attraction acting between them is given by the universal law of gravitation as F = G(m₁m₂)/r², where G is the universal gravitational constant and its value is 6.67 × 10⁻¹¹ N m² kg⁻². Hooke, without evidence in favor of the supposition, could only guess that the inverse square law was approximately valid at great distances from the center. Although the law and its equation were effective in predicting many phenomena, several discrepancies … Thus Newton gave a justification, otherwise lacking, for applying the inverse square law to large spherical planetary masses as if they were tiny particles.
Correspondence of Isaac Newton, Vol 2 (1676–1687), (Cambridge University Press, 1960), document #288, 20 June 1686. What this means is that for any two objects in the universe, the gravity between these two objects depends only on their mass and distance. UNIVERSAL LAW OF GRAVITATION: Newton's law of gravitation states that every body in this universe attracts every other body with a force, which is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centres. Gauss's law gives the gravitational field on, inside, and outside of symmetric masses. [8] The fact that most of Hooke's private papers had been destroyed or have disappeared does not help to establish the truth. When relativistic effects become important, general relativity must be used to describe the system. This law is known as the universal law of gravitation. [4] It is a part of classical mechanics and was formulated in Newton's work Philosophiæ Naturalis Principia Mathematica ("the Principia"), first published on 5 July 1687. The force of attraction between two masses is defined by the Universal Gravitation Equation.
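As a quick numerical sanity check of the Universal Gravitation Equation, the law reproduces the familiar surface acceleration g ≈ 9.8 m/s². This is a sketch only; the Earth's mass and radius below are standard textbook values assumed by me, not figures given in this text:

```python
# Sketch: recover Earth's surface gravity from F = G*m1*m2/r^2.
# M_EARTH and R_EARTH are assumed standard values, not from the article.
G = 6.674e-11        # universal gravitational constant, N m^2 kg^-2
M_EARTH = 5.972e24   # kg (assumed)
R_EARTH = 6.371e6    # m (assumed)

def gravitational_force(m1, m2, r):
    """Magnitude of the gravitational attraction between m1 and m2 at distance r."""
    return G * m1 * m2 / r**2

m = 70.0                                       # an arbitrary 70 kg test mass
F = gravitational_force(M_EARTH, m, R_EARTH)   # force on the test mass, in N
g = F / m                                      # acceleration, independent of m
```

Dividing out the test mass gives g = G·M/R², which comes out close to the measured 9.8 m/s², as expected.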
[23] In addition, Newton had formulated, in Propositions 43–45 of Book 1[24] and associated sections of Book 3, a sensitive test of the accuracy of the inverse square law, in which he showed that only where the law of force is calculated as the inverse square of the distance will the directions of orientation of the planets' orbital ellipses stay constant as they are observed to do apart from small effects attributable to inter-planetary perturbations. (Page 436, Correspondence, Vol. 2, already cited.) [27] Newton also acknowledged to Halley that his correspondence with Hooke in 1679–80 had reawakened his dormant interest in astronomical matters, but that did not mean, according to Newton, that Hooke had told Newton anything new or original: "yet am I not beholden to him for any light into that business but only for the diversion he gave me from my other studies to think on these things & for his dogmaticalness in writing as if he had found the motion in the Ellipsis, which inclined me to try it ..."[21]. The value of the constant G was first accurately determined from the results of the Cavendish experiment conducted by the British scientist Henry Cavendish in 1798, although Cavendish did not himself calculate a numerical value for G.[6] This experiment was also the first test of Newton's theory of gravitation between masses in the laboratory. Thus, if a spherically symmetric body has a uniform core and a uniform mantle with a density that is less than 2/3 of that of the core, then the gravity initially decreases outwardly beyond the boundary, and if the sphere is large enough, further outward the gravity increases again, and eventually it exceeds the gravity at the core/mantle boundary. Importance of the universal law of gravitation: the gravitational force of the earth ties terrestrial objects to the earth.
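The non-monotonic interior gravity described above can be illustrated numerically. The following is a minimal sketch with illustrative numbers of my own choosing (not from the text): a two-layer sphere whose mantle density is one third of the core density, so the ratio is below the 2/3 threshold and g(r) first falls beyond the core boundary and then rises again toward the surface:

```python
from math import pi

# Sketch: gravity g(r) inside a two-layer sphere (dense uniform core,
# lighter uniform mantle). With mantle density below 2/3 of the core
# density, g decreases just outside the core and, for a large enough
# sphere, increases again further out, as the text describes.
G = 6.674e-11  # N m^2 kg^-2

def g_two_layer(r, a, rho_core, rho_mantle):
    """Gravitational acceleration at radius r; a is the core radius."""
    if r <= a:
        m_enclosed = (4 / 3) * pi * rho_core * r**3
    else:
        m_enclosed = (4 / 3) * pi * (rho_core * a**3
                                     + rho_mantle * (r**3 - a**3))
    return G * m_enclosed / r**2

# Hypothetical numbers: core radius 3500 km, surface radius 6400 km,
# densities 12000 and 4000 kg/m^3 (ratio 1/3 < 2/3).
a, R = 3.5e6, 6.4e6
g_boundary = g_two_layer(a, a, 12000.0, 4000.0)      # core/mantle boundary
g_minimum = g_two_layer(5.56e6, a, 12000.0, 4000.0)  # near the interior minimum
g_surface = g_two_layer(R, a, 12000.0, 4000.0)       # at the surface
```

With these densities the minimum sits near r ≈ a·4^(1/3) ≈ 5560 km: g drops from its boundary value and then climbs again toward the surface, the profile described above (a still larger sphere would be needed for g to exceed the boundary value).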
What Newton did was to show how the inverse-square law of attraction had many necessary mathematical connections with observable features of the motions of bodies in the solar system, and that they were related in such a way that the observational evidence and the mathematical demonstrations, taken together, gave reason to believe that the inverse square law was not just approximately true but exactly true (to the accuracy achievable in Newton's time and for about two centuries afterwards, and with some loose ends of points that could not yet be certainly examined, where the implications of the theory had not yet been adequately identified or calculated). Newton's formulation is valid when the dimensionless quantities φ/c² and (v/c)² are both much less than one, where φ is the gravitational potential, v is the velocity of the objects being studied, and c is the speed of light. This background shows there was basis for Newton to deny deriving the inverse square law from Hooke.[26] Hooke's statements up to 1674 made no mention, however, that an inverse square law applies or might apply to these attractions. In the force law, G is the universal gravitational constant, and m₁ and m₂ are the masses of the two objects. The gravitational field has units of acceleration; in SI, this is m/s². Why bodies gravitate at all is still under investigation and, though hypotheses abound, the definitive answer has yet to be found. For extended bodies, in the limit as the component point masses become "infinitely small", the total attraction is obtained by integrating the force (in vector form, see below) over the extents of the two bodies. Newton's law of universal gravitation is about the universality of gravity.
Consider two massive bodies having masses m₁ and m₂, separated by a distance r. By the shell theorem, an object with a spherically symmetric distribution of mass exerts the same gravitational attraction on external bodies as if all the object's mass were concentrated at a point at its center. Newton's papers also show him clearly expressing the concept of linear inertia, for which he was indebted to Descartes' work, published in 1644 (as Hooke probably was). The universal law of gravitation states that every object in the universe attracts every other object with a force, called the gravitational force, that is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centres. The lesson offered by Hooke to Newton here, although significant, was one of perspective and did not change the analysis. Coulomb's law has the product of two charges in place of the product of the masses, and the Coulomb constant in place of the gravitational constant. In Einstein's theory, by contrast, energy and momentum distort spacetime in their vicinity, and other particles move in trajectories determined by the geometry of spacetime. [Borelli, G. A., "Theoricae Mediceorum Planetarum ex causis physicis deductae", Florence, 1666.] The equation for universal gravitation thus takes the form[37]

F = G m₁ m₂ / r²

where F is the gravitational force acting between two objects, m₁ and m₂ are the masses of the objects, r is the distance between the centers of their masses, and G is the gravitational constant.
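As a worked check of the formula (using the round values quoted in this article for the earth's mass, M ≈ 6 × 10²⁴ kg, together with an assumed earth radius R ≈ 6.4 × 10⁶ m and G ≈ 6.67 × 10⁻¹¹ N m²/kg²), the force on an object of mass m at the earth's surface recovers the familiar surface acceleration:

```latex
F = \frac{G M m}{R^2}
  = \frac{(6.67\times 10^{-11})(6\times 10^{24})}{(6.4\times 10^{6})^2}\, m
  \approx 9.8\, m \;\; \text{N},
```

i.e. an acceleration of about 9.8 m/s², as expected.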
A modern assessment about the early history of the inverse square law is that "by the late 1670s", the assumption of an "inverse proportion between gravity and the square of distance was rather common and had been advanced by a number of different people for different reasons".[34] On the latter two aspects, Hooke himself stated in 1674: "Now what these several degrees [of attraction] are I have not yet experimentally verified"; and as to his whole proposal: "This I only hint at present", "having my self many other things in hand which I would first compleat, and therefore cannot so well attend it". As a consequence of the shell theorem, for example, within a shell of uniform thickness and density there is no net gravitational acceleration anywhere within the hollow sphere. The n-body problem in general relativity is considerably more difficult to solve.[42] The force is inversely proportional to the square of the distance: F ∝ 1/d². The law has the consequence that there exists a gravitational potential field V(r) such that g(r) = −∇V(r). If m₁ is a point mass or the mass of a sphere with homogeneous mass distribution, the force field g(r) outside the sphere is isotropic, i.e., depends only on the distance r from the center of the sphere. Newton's universal law of gravitation states: "Every particle attracts every other particle in the universe with a force which is directly proportional to the product of their masses and inversely proportional to the square of the distance between their centers." By Newton's third law, the force exerted on each of the two objects is the same in magnitude. Newton's description of gravity is sufficiently accurate for many practical purposes and is therefore widely used.
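The potential field mentioned above can be written out explicitly. For a point mass m₁ at the origin, the standard forms are:

```latex
V(r) = -\frac{G m_1}{r}, \qquad
\mathbf{g}(\mathbf{r}) = -\nabla V = -\frac{G m_1}{r^2}\,\hat{\mathbf{r}},
```

which makes the isotropy explicit: g depends only on the distance r, not on direction.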
They also involved the combination of tangential and radial displacements, which Newton was making in the 1660s. Gravitational fields are also conservative; that is, the work done by gravity from one position to another is path-independent. Thus Hooke postulated mutual attractions between the Sun and planets, in a way that increased with nearness to the attracting body, together with a principle of linear inertia. Newton published his theory of universal gravitation in the 1680s. At the same time (according to Edmond Halley's contemporary report) Hooke agreed that "the Demonstration of the Curves generated thereby" was wholly Newton's.[12] [Page 309 in H W Turnbull (ed.), Correspondence, Vol. 2, already cited.] All objects attract each other with a force of gravitational attraction. In today's language, the law states that every point mass attracts every other point mass by a force acting along the line intersecting the two points. The gravitational field is a vector field that describes the gravitational force that would be applied on an object at any given point in space, per unit mass. [W.W. Rouse Ball, "An Essay on Newton's 'Principia'" (London and New York: Macmillan, 1893), at page 69.] Newton's theory was superseded by Einstein's theory of general relativity. He never, in his words, "assigned the cause of this power".
After reading this section, it is recommended to check the following movie of Kepler's laws: http://www.archive.org/details/kepler_full_cc (movie length is about 7 minutes). The publication of the theory has become known as the "first great unification", as it marked the unification of the previously described phenomena of gravity on Earth with known astronomical behaviors.[1][2][3][note 1] For two objects (e.g. object 2 is a rocket, object 1 the Earth), we simply write r instead of r₁₂ and m instead of m₂ and define the gravitational field g(r) as

g(r) = −(G m₁ / r²) r̂

so that F(r) = m g(r). This formulation is dependent on the objects causing the field. [H W Turnbull (ed.), Correspondence of Isaac Newton, Vol 2 (1676–1687), Cambridge University Press, 1960, document #239.] [Proposition 75, Theorem 35: p. 956 – I. Bernard Cohen and Anne Whitman, translators.] [Discussion points can be seen for example in Bullialdus (Ismael Bouillau), "Astronomia philolaica", Paris, 1645.] The two-body problem has been completely solved, as has the restricted three-body problem.[44] We will now derive the formula for the gravitational force from the universal law of gravitation. According to Newton, while the Principia was still at the pre-publication stage, there were so many a priori reasons to doubt the accuracy of the inverse-square law (especially close to an attracting sphere) that "without my (Newton's) Demonstrations, to which Mr Hooke is yet a stranger, it cannot [be] believed by a judicious Philosopher to be any where accurate."[22] The value G ≈ 6.674 × 10⁻¹¹ N m²/kg² is used for solving numericals based on Newton's law of universal gravitation.
It was shown separately that spherically symmetrical masses attract and are attracted as if all their mass were concentrated at their centers.[45] Isaac Newton: "In [experimental] philosophy particular propositions are inferred from the phenomena and afterwards rendered general by induction." Observations conflicting with Newton's formula motivated extensions of the law: some were proposed by Laplace (around 1790) and Decombes (1913),[39] and in recent years quests for non-inverse-square terms in the law of gravity have been carried out by neutron interferometry.[40] Newton further defended his work by saying that had he first heard of the inverse square proportion from Hooke, he would still have some rights to it in view of his demonstrations of its accuracy.[11] When Newton presented Book 1 of the unpublished text in April 1686 to the Royal Society, Robert Hooke made a claim that Newton had obtained the inverse square law from him. Newton's defence refers among other things to his finding, supported by mathematical demonstration, that if the inverse square law applies to tiny particles, then even a large spherically symmetrical mass also attracts masses external to its surface, even close up, exactly as if all its own mass were concentrated at its center. The formation of tides in the ocean is due to the force of attraction between the moon and ocean water. Gravity is universal: the universal law of gravitation states that there is a force of attraction between any two masses separated by some distance. For reference, the mass of the earth is about 6 × 10²⁴ kg and that of the moon about 7.4 × 10²² kg.
The remaining points recoverable from this section can be summarized as follows. A number of authors have had more to say about what Newton gained from Hooke, and some aspects remain controversial; the lesson offered by Hooke was not, however, as it has sometimes been represented. Hooke's gravitation was also not yet universal, though it approached universality more closely than previous hypotheses; the main influence on Newton in this respect may have been Borelli. Newton adopted the language of inward or centripetal force, and these matters do not appear to have been learned by Newton from Hooke.[28] (See also, for example, the results of Propositions 43–45 and 70–75 in Book 1, cited above.)

The law of universal gravitation is an essential principle of physics, first codified by Sir Isaac Newton in the 1600s; the method of inferring general laws from particular observations is what Newton called inductive reasoning. The universal gravitational constant was first measured by Henry Cavendish, who determined it by recording the oscillations of a pendulum;[7] the experiment took place 111 years after the publication of Newton's Principia and approximately 71 years after his death. Its value is G ≈ 6.67 × 10⁻¹¹ N m²/kg².

By Newton's third law it can be seen that F₁₂ = −F₂₁: the magnitude of the force is the same on both masses m₁ and m₂. Both gravity and the Coulomb force are inverse-square, action-at-a-distance forces that get weaker with increasing distance; if the distance between two objects doubles, the gravitational force decreases by a factor of four. Gravity is by far the weaker of the two.

The shell theorem can also be used to find the gravitational field for points inside a spherically symmetric distribution of matter; for a hollow sphere of radius R and total mass M, the field inside is zero. The theorem is not generally true for non-spherically-symmetrical bodies, and the gravity of the earth may be highest at the core/mantle boundary. Newton's law comes in handy when calculating the trajectories of astronomical bodies and predicting their motion; in the 20th century, understanding the dynamics of globular cluster star systems became an important n-body problem too. In situations where either dimensionless parameter, φ/c² or (v/c)², is large, general relativity must be used to describe the system; Einstein's general relativity yields a description of the motions of light and mass that is consistent with all available observations.
Lanes in the context of RNA-SEQ data

Asked 7 weeks ago by Sammy:

Heya. I'm here to understand lanes in RNA-SEQ. I got my data from my sequencing provider with the file naming convention showing 4 lanes in total. Each experimental condition is found 3 times in each lane. The way I'm reading this is that I have 4 replicates. Am I correct? I have been chatting here but I have been told to start a new post, so here I am. Also, I think I asked the wrong question. The file naming convention looks like this (the first column is the sample name and the second is the lane):

a1 | LANE 1
a2 | LANE 1
a3 | LANE 1
b1 | LANE 1
b2 | LANE 1
b3 | LANE 1
a1 | LANE 2
a2 | LANE 2
a3 | LANE 2
b1 | LANE 2
b2 | LANE 2
b3 | LANE 2

I have 4 lanes in total with 5 different samples (that repeat 3 times each) per lane; the same samples are repeating on each lane. I have also been told we have 4 replicates for each prep. So the a1, a2 and a3 from each lane are from the sequencing? Of those 5 different samples: 1 is the control, 1 is the negative and the other 3 are different experimental conditions. I really have difficulties understanding the experimental design here. Any input would be useful. Thank you!

Answer (GenoMax, 7 weeks ago):

If you have individual data files for a1, a2, a3, b1, b2, b3 that have L001 in the file names, then you do have 3 replicates of a and b that were pooled and then run on a flow cell. As the same pool ran on multiple lanes, you should have a corresponding set of individual data files that have L002 in their file names.

a1_L001.fastq.gz
a1_L002.fastq.gz
a1_L003.fastq.gz
a1_L004.fastq.gz

are sequencing replicates for sample a1 that ran on multiple lanes as part of the large pool. Those files can be merged together for analysis.

> I have also been told we have 4 replicates for each prep.

This part can't be explained by the information you provided here. For more: C: What Is A "Lane" In Next Generation Sequencing Context?
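The per-lane files mentioned in the answer can be merged with plain `cat`, since gzip streams concatenate cleanly. A minimal sketch (the four `a1_L00*.fastq.gz` inputs below are tiny stand-ins created for illustration; in practice they come from the sequencer):

```shell
# Create tiny stand-in per-lane files (normally produced by the sequencer).
# Each holds one 4-line FASTQ record.
for lane in L001 L002 L003 L004; do
  printf '@read_%s\nACGT\n+\nIIII\n' "$lane" | gzip > "a1_${lane}.fastq.gz"
done

# Merge the lanes for sample a1: gzip members can be concatenated directly.
cat a1_L001.fastq.gz a1_L002.fastq.gz a1_L003.fastq.gz a1_L004.fastq.gz > a1_merged.fastq.gz

# The merged file holds all four reads: 4 reads x 4 FASTQ lines = 16 lines.
gunzip -c a1_merged.fastq.gz | wc -l
```

Keep the lane order consistent across samples (and across R1/R2 for paired-end data) so that mate files stay in sync.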
Comment: That was really useful. Now I definitely have a starting point. Is it plausible that lanes, in this case, separate different experiments? I was not involved in the biological experiments or the sequencing; I'm working with what I have. (I couldn't post any comment for a few hours and I had to wait.) Cheers!

Comment: Anything is possible, but it would be extremely unwise to have two fastqs which are different samples whose names only differ in their lane assignment. We had someone submit to us like that once, and then we demanded that in the future they name their samples better. Also, some sequencers like the NextSeq have 4 'lanes', but everything goes on all 4. You can't put one sample on lane 1 and a different sample on lane 2. If those were run on a NextSeq, they must all be the same sample.

Comment: Thank you. They were in fact on a NextSeq (looked into the library prep) and, indeed, all samples were prepared in triplicate. So a1_L001.fastq.gz, a1_L002.fastq.gz, a1_L003.fastq.gz and a1_L004.fastq.gz should be the same sample. :)

Comment: I don't believe it's possible to know what instrument a sample went on based on its library prep. You can probably tell from the names of the reads what kind of instrument they were run on.

Comment: Sorry. The sequencing provider told me they ran on a NextSeq 550, 2x 75 bp, high output kit. The library prep was done with the NEB Ultra II Directional RNA Library Prep Kit for Illumina®. I expressed myself poorly.

Comment:

> Is it plausible that lanes, in this case, separate different experiments?

Based on the example you posted, I don't think so. All of your samples appear to be part of a large pool that ran across the entire flowcell. That said, one can certainly separate experiments on lanes with the right flowcell design. @swbarnes2 makes a good point.
Some Illumina sequencers have optically distinct lanes that are not physically separate (NextSeq, NovaSeq without the XP kit); lane assignment does not make a difference there.

Comment: Thank you, GenoMax :)
# help

If a and b are the roots of x^2 - 4x + 1 = 0, find a^3 + b^3.

Dec 17, 2019

#1

Hello Guest!

$$\{a,b\}\subseteq\{x_1,x_2\}$$

$$x^2 - 4x + 1 = 0\\ x=2\pm\sqrt{4-1}\\ x_1=2+\sqrt{3}\\ x_2=2-\sqrt{3}$$

$$a^3+b^3=(2+\sqrt{3})^3+(2-\sqrt{3})^3=26+15\sqrt{3}+26-15\sqrt{3}$$

$$a^3+b^3=52$$

Dec 17, 2019
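The same result follows without computing the roots. By Vieta's formulas for x² − 4x + 1 = 0 we have a + b = 4 and ab = 1, and the sum-of-cubes identity gives:

```latex
a^3 + b^3 = (a+b)^3 - 3ab(a+b) = 4^3 - 3\cdot 1\cdot 4 = 64 - 12 = 52
```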
# GeneralizedLinearModel class

Generalized linear regression model class

## Description

An object comprising training data, model description, diagnostic information, and fitted coefficients for a generalized linear regression. Predict model responses with the `predict` or `feval` methods.

## Construction

`mdl = fitglm(tbl)` or `mdl = fitglm(X,y)` creates a generalized linear model of a table or dataset array `tbl`, or of the responses `y` to a data matrix `X`. For details, see `fitglm`.

`mdl = stepwiseglm(tbl)` or `mdl = stepwiseglm(X,y)` creates a generalized linear model of a table or dataset array `tbl`, or of the responses `y` to a data matrix `X`, with unimportant predictors excluded. For details, see `stepwiseglm`.

### `tbl` — Input data
table | dataset array

Input data, specified as a table or dataset array. When `modelspec` is a formula, it specifies the variables to be used as the predictors and response. Otherwise, if you do not specify the predictor and response variables, the last variable is the response variable and the others are the predictor variables by default. Predictor variables can be numeric, or any grouping variable type, such as logical or categorical (see Grouping Variables). The response must be numeric or logical. To set a different column as the response variable, use the `ResponseVar` name-value pair argument. To use a subset of the columns as predictors, use the `PredictorVars` name-value pair argument.

Data Types: `single` | `double` | `logical`

### `X` — Predictor variables
matrix

Predictor variables, specified as an n-by-p matrix, where n is the number of observations and p is the number of predictor variables. Each column of `X` represents one variable, and each row represents one observation. By default, there is a constant term in the model, unless you explicitly remove it, so do not include a column of 1s in `X`.
Data Types: `single` | `double` | `logical`

### `y` — Response variable
vector

Response variable, specified as an n-by-1 vector, where n is the number of observations. Each entry in `y` is the response for the corresponding row of `X`.

Data Types: `single` | `double`

## Properties

### `CoefficientCovariance` — Covariance matrix of coefficient estimates
numeric matrix

Covariance matrix of coefficient estimates, stored as a p-by-p matrix of numeric values. p is the number of coefficients in the fitted model.

### `CoefficientNames` — Coefficient names
cell array of strings

Coefficient names, stored as a cell array of strings containing a label for each coefficient.

### `Coefficients` — Coefficient values
table

Coefficient values, stored as a table. `Coefficients` has one row for each coefficient and the following columns:

• `Estimate` — Estimated coefficient value
• `SE` — Standard error of the estimate
• `tStat` — t statistic for a test that the coefficient is zero
• `pValue` — p-value for the t statistic

To obtain any of these columns as a vector, index into the property using dot notation. For example, in `mdl` the estimated coefficient vector is

`beta = mdl.Coefficients.Estimate`

Use `coefTest` to perform other tests on the coefficients.

### `Deviance` — Deviance of the fit
numeric value

Deviance of the fit, stored as a numeric value. Deviance is useful for comparing two models when one is a special case of the other. The difference between the deviance of the two models has a chi-square distribution with degrees of freedom equal to the difference in the number of estimated parameters between the two models. For more information on deviance, see Deviance.

### `DFE` — Degrees of freedom for error
positive integer value

Degrees of freedom for error (residuals), equal to the number of observations minus the number of estimated coefficients, stored as a positive integer value.
### `Diagnostics` — Diagnostic information
table

Diagnostic information for the model, stored as a table. Diagnostics can help identify outliers and influential observations. `Diagnostics` contains the following fields:

| Field | Meaning | Utility |
| --- | --- | --- |
| `Leverage` | Diagonal elements of `HatMatrix` | Leverage indicates to what extent the predicted value for an observation is determined by the observed value for that observation. A value close to `1` indicates that the prediction is largely determined by that observation, with little contribution from the other observations. A value close to `0` indicates the fit is largely determined by the other observations. For a model with p coefficients and n observations, the average value of `Leverage` is p/n. An observation with `Leverage` larger than 2*p/n can be an outlier. |
| `CooksDistance` | Cook's measure of scaled change in fitted values | `CooksDistance` is a measure of scaled change in fitted values. An observation with `CooksDistance` larger than three times the mean Cook's distance can be an outlier. |
| `HatMatrix` | Projection matrix to compute fitted from observed responses | `HatMatrix` is an n-by-n matrix such that `Fitted = HatMatrix*Y`, where `Y` is the response vector and `Fitted` is the vector of fitted response values. |

All of these quantities are computed on the scale of the linear predictor. So, for example, in the equation that defines the hat matrix,

```
Yfit = glm.Fitted.LinearPredictor
Y = glm.Fitted.LinearPredictor + glm.Residuals.LinearPredictor
```

### `Dispersion` — Scale factor of the variance of the response
structure

Scale factor of the variance of the response, stored as a structure. `Dispersion` multiplies the variance function for the distribution. For example, the variance function for the binomial distribution is p(1–p)/n, where p is the probability parameter and n is the sample size parameter. If `Dispersion` is near `1`, the variance of the data appears to agree with the theoretical variance of the binomial distribution.
If `Dispersion` is larger than `1`, the data are "overdispersed" relative to the binomial distribution.

### `DispersionEstimated` — Flag to indicate use of dispersion scale factor
logical value

Flag to indicate whether `fitglm` used the `Dispersion` scale factor to compute standard errors for the coefficients in `Coefficients.SE`, stored as a logical value. If `DispersionEstimated` is `false`, `fitglm` used the theoretical value of the variance.

• `DispersionEstimated` can be `false` only for `'binomial'` or `'poisson'` distributions.
• Set `DispersionEstimated` by setting the `DispersionFlag` name-value pair in `fitglm`.

### `Distribution` — Generalized distribution information
structure

Generalized distribution information, stored as a structure with the following fields relating to the generalized distribution:

| Field | Description |
| --- | --- |
| `Name` | Name of the distribution: one of `'normal'`, `'binomial'`, `'poisson'`, `'gamma'`, or `'inverse gaussian'`. |
| `DevianceFunction` | Function that computes the components of the deviance as a function of the fitted parameter values and the response values. |
| `VarianceFunction` | Function that computes the theoretical variance for the distribution as a function of the fitted parameter values. When `DispersionEstimated` is `true`, `Dispersion` multiplies the variance function in the computation of the coefficient standard errors. |

### `Fitted` — Fitted response values based on input data
table

Fitted (predicted) values based on the input data, stored as a table with one row for each observation and the following columns.

| Field | Description |
| --- | --- |
| `Response` | Predicted values on the scale of the response. |
| `LinearPredictor` | Predicted values on the scale of the linear predictor. These are the same as the link function applied to the `Response` fitted values. |
| `Probability` | Fitted probabilities (this column is included only with the binomial distribution). |

To obtain any of the columns as a vector, index into the property using dot notation.
For example, in the model `mdl`, the vector `f` of fitted values on the response scale is

`f = mdl.Fitted.Response`

Use `predict` to compute predictions for other predictor values, or to compute confidence bounds on `Fitted`.

### `Formula` — Model information (`LinearFormula` object | `NonLinearFormula` object)

Model information, stored as a `LinearFormula` object or `NonLinearFormula` object. If you fit a linear or generalized linear regression model, then `Formula` is a `LinearFormula` object. If you fit a nonlinear regression model, then `Formula` is a `NonLinearFormula` object.

### `Link` — Link function (structure)

Link function, stored as a structure with the following fields:

| Field | Description |
| --- | --- |
| `Name` | Name of the link function, or `''` if you specified the link as a function handle rather than a string. |
| `LinkFunction` | The function that defines f, a function handle. |
| `DevianceFunction` | Derivative of f, a function handle. |
| `VarianceFunction` | Inverse of f, a function handle. |

The link is a function f that links the distribution parameter μ to the fitted linear combination Xb of the predictors: f(μ) = Xb.

### `LogLikelihood` — Log likelihood (numeric value)

Log likelihood of the model distribution at the response values, stored as a numeric value. The mean is fitted from the model, and other parameters are estimated as part of the model fit.

### `ModelCriterion` — Criterion for model comparison (structure)

Criterion for model comparison, stored as a structure with the following fields:

• `AIC` — Akaike information criterion
• `AICc` — Akaike information criterion corrected for sample size
• `BIC` — Bayesian information criterion
• `CAIC` — Consistent Akaike information criterion

To obtain any of these values as a scalar, index into the property using dot notation. For example, in a model `mdl`, the AIC value `aic` is:

`aic = mdl.ModelCriterion.AIC`

### `NumCoefficients` — Number of model coefficients (positive integer)

Number of model coefficients, stored as a positive integer.
`NumCoefficients` includes coefficients that are set to zero when the model terms are rank deficient.

### `NumEstimatedCoefficients` — Number of estimated coefficients (positive integer)

Number of estimated coefficients in the model, stored as a positive integer. `NumEstimatedCoefficients` does not include coefficients that are set to zero when the model terms are rank deficient. `NumEstimatedCoefficients` is the degrees of freedom for regression.

### `NumObservations` — Number of observations (positive integer)

Number of observations the fitting function used in fitting, stored as a positive integer. This is the number of observations supplied in the original table, dataset, or matrix, minus any excluded rows (set with the `Exclude` name-value pair) or rows with missing values.

### `NumPredictors` — Number of predictor variables (positive integer)

Number of predictor variables used to fit the model, stored as a positive integer.

### `NumVariables` — Number of variables (positive integer)

Number of variables in the input data, stored as a positive integer. `NumVariables` is the number of variables in the original table or dataset, or the total number of columns in the predictor matrix and response vector when the fit is based on those arrays. It includes variables, if any, that are not used as predictors or as the response.

### `ObservationInfo` — Observation information (table)

Observation information, stored as an n-by-4 table, where n is equal to the number of rows of input data. The four columns of `ObservationInfo` contain the following:

| Field | Description |
| --- | --- |
| `Weights` | Observation weights. Default is all `1`. |
| `Excluded` | Logical value, where `1` indicates an observation that you excluded from the fit with the `Exclude` name-value pair. |
| `Missing` | Logical value, where `1` indicates a missing value in the input. Missing values are not used in the fit. |
| `Subset` | Logical value, where `1` indicates the observation is not excluded or missing, so is used in the fit. |
### `ObservationNames` — Observation names (cell array)

Observation names, stored as a cell array of strings containing the names of the observations used in the fit.

• If the fit is based on a table or dataset containing observation names, `ObservationNames` uses those names.
• Otherwise, `ObservationNames` is an empty cell array.

### `Offset` — Offset variable (numeric vector)

Offset variable, stored as a numeric vector with the same length as the number of rows in the data. `Offset` is passed from `fitglm` or `stepwiseglm` in the `Offset` name-value pair. The fitting function used `Offset` as a predictor variable, but with the coefficient set to exactly `1`. In other words, the formula for fitting was

`μ ~ Offset + (terms involving real predictors)`

with the `Offset` predictor having coefficient `1`.

For example, consider a Poisson regression model. Suppose the number of counts is known for theoretical reasons to be proportional to a predictor `A`. By using the log link function and by specifying `log(A)` as an offset, you can force the model to satisfy this theoretical constraint.

### `PredictorNames` — Names of predictors used to fit the model (cell array)

Names of predictors used to fit the model, stored as a cell array of strings.

### `Residuals` — Residuals for fitted model (table)

Residuals for the fitted model, stored as a table with one row for each observation and the following columns.

| Field | Description |
| --- | --- |
| `Raw` | Observed minus fitted values. |
| `LinearPredictor` | Residuals on the linear predictor scale, equal to the adjusted response value minus the fitted linear combination of the predictors. |
| `Pearson` | Raw residuals divided by the estimated standard deviation of the response. |
| `Anscombe` | Residuals defined on transformed data, with the transformation chosen to remove skewness. |
| `Deviance` | Residuals based on the contribution of each observation to the deviance. |

To obtain any of these columns as a vector, index into the property using dot notation.
For example, in a model `mdl`, the ordinary raw residual vector `r` is:

`r = mdl.Residuals.Raw`

Rows not used in the fit because of missing values (in `ObservationInfo.Missing`) contain `NaN` values. Rows not used in the fit because of excluded values (in `ObservationInfo.Excluded`) contain `NaN` values, with the following exceptions:

• `raw` contains the difference between the observed and predicted values.
• `standardized` is the residual, standardized in the usual way.
• `studentized` matches the standardized values because this residual is not used in the estimate of the residual standard deviation.

### `ResponseName` — Response variable name (string)

Response variable name, stored as a string.

### `Rsquared` — R-squared value for the model (structure)

R-squared value for the model, stored as a structure. For a linear or nonlinear model, `Rsquared` is a structure with two fields:

• `Ordinary` — Ordinary (unadjusted) R-squared
• `Adjusted` — R-squared adjusted for the number of coefficients

For a generalized linear model, `Rsquared` is a structure with five fields:

• `Ordinary` — Ordinary (unadjusted) R-squared
• `Adjusted` — R-squared adjusted for the number of coefficients
• `LLR` — Log-likelihood ratio
• `Deviance` — Deviance
• `AdjGeneralized` — Adjusted generalized R-squared

The R-squared value is the proportion of total sum of squares explained by the model. The ordinary R-squared value relates to the `SSR` and `SST` properties: `Rsquared = SSR/SST = 1 - SSE/SST`.

To obtain any of these values as a scalar, index into the property using dot notation. For example, the adjusted R-squared value in `mdl` is

`r2 = mdl.Rsquared.Adjusted`

### `SSE` — Sum of squared errors (numeric value)

Sum of squared errors (residuals), stored as a numeric value. The Pythagorean theorem implies `SST = SSE + SSR`.

### `SSR` — Regression sum of squares (numeric value)

Regression sum of squares, stored as a numeric value.
The regression sum of squares is equal to the sum of squared deviations of the fitted values from their mean. The Pythagorean theorem implies `SST = SSE + SSR`.

### `SST` — Total sum of squares (numeric value)

Total sum of squares, stored as a numeric value. The total sum of squares is equal to the sum of squared deviations of `y` from `mean(y)`. The Pythagorean theorem implies `SST = SSE + SSR`.

### `Steps` — Stepwise fitting information (structure)

Stepwise fitting information, stored as a structure with the following fields.

| Field | Description |
| --- | --- |
| `Start` | Formula representing the starting model |
| `Lower` | Formula representing the lower bound model, the terms that must remain in the model |
| `Upper` | Formula representing the upper bound model; the model cannot contain more terms than `Upper` |
| `Criterion` | Criterion used for the stepwise algorithm, such as `'sse'` |
| `PEnter` | Value of the parameter, such as `0.05` |
| `PRemove` | Value of the parameter, such as `0.10` |
| `History` | Table representing the steps taken in the fit |

The `History` table has one row for each step, including the initial fit, and the following variables (columns).

| Field | Description |
| --- | --- |
| `Action` | Action taken during this step, one of: `'Start'` (first step), `'Add'` (a term is added), or `'Remove'` (a term is removed) |
| `TermName` | For the `'Start'` step, the starting model specification; for `'Add'` or `'Remove'` steps, the term moved in that step |
| `Terms` | Terms matrix (see `modelspec` of `fitlm`) |
| `DF` | Regression degrees of freedom after this step |
| `delDF` | Change in regression degrees of freedom from the previous step (negative for steps that remove a term) |
| `Deviance` | Deviance (residual sum of squares) at that step |
| `FStat` | F statistic that led to this step |
| `PValue` | p-value of the F statistic |

The structure is empty unless you use `stepwiselm` or `stepwiseglm` to fit the model.
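The Pythagorean identity `SST = SSE + SSR` noted in the sum-of-squares properties above holds exactly for an ordinary least-squares fit that includes an intercept. A small pure-Python sketch (illustrative only, not MATLAB; the data are made up) that fits a simple regression in closed form and checks the identity:

```python
# Fit y = a + b*x by ordinary least squares (closed form) and verify
# the sum-of-squares decomposition SST = SSE + SSR.
x = [1, 2, 3, 4]
y = [2, 3, 5, 4]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
fitted = [a + b * xi for xi in x]

sse = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))  # error sum of squares
ssr = sum((fi - ybar) ** 2 for fi in fitted)            # regression sum of squares
sst = sum((yi - ybar) ** 2 for yi in y)                 # total sum of squares

assert abs(sst - (sse + ssr)) < 1e-9
print(round(ssr / sst, 2))  # ordinary R-squared = SSR/SST → 0.64
```

The same check fails for fits without an intercept, where the decomposition around `mean(y)` no longer applies.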
### `VariableInfo` — Information about input variables (table)

Information about the input variables contained in `Variables`, stored as a table with one row for each variable and the following columns.

| Field | Description |
| --- | --- |
| `Class` | String giving the variable class, such as `'double'` |
| `Range` | Cell array giving the variable range: for a continuous variable, a two-element vector `[min,max]` of the minimum and maximum values; for a categorical variable, a cell array of distinct variable values |
| `InModel` | Logical vector, where `true` indicates the variable is in the model |
| `IsCategorical` | Logical vector, where `true` indicates a categorical variable |

### `VariableNames` — Names of variables used in fit (cell array)

Names of the variables used in the fit, stored as a cell array of strings.

• If the fit is based on a table or dataset, this property provides the names of the variables in that table or dataset.
• If the fit is based on a predictor matrix and response vector, `VariableNames` contains the values of the `VarNames` name-value pair of the fitting method.
• Otherwise, the variables have the default fitting names.

### `Variables` — Data used to fit the model (table)

Data used to fit the model, stored as a table. `Variables` contains both observation and response values. If the fit is based on a table or dataset array, `Variables` contains all of the data from that table or dataset array. Otherwise, `Variables` is a table created from the input data matrix `X` and response vector `y`.
## Methods

| Method | Description |
| --- | --- |
| `addTerms` | Add terms to generalized linear model |
| `coefCI` | Confidence intervals of coefficient estimates of generalized linear model |
| `coefTest` | Linear hypothesis test on generalized linear regression model coefficients |
| `devianceTest` | Analysis of deviance |
| `disp` | Display generalized linear regression model |
| `feval` | Evaluate generalized linear regression model prediction |
| `fit` | Create generalized linear regression model |
| `plotDiagnostics` | Plot diagnostics of generalized linear regression model |
| `plotResiduals` | Plot residuals of generalized linear regression model |
| `plotSlice` | Plot of slices through fitted generalized linear regression surface |
| `predict` | Predict response of generalized linear regression model |
| `random` | Simulate responses for generalized linear regression model |
| `removeTerms` | Remove terms from generalized linear model |
| `step` | Improve generalized linear regression model by adding or removing terms |
| `stepwise` | Create generalized linear regression model by stepwise regression |

## Definitions

### Canonical Link Function

The default link function for a generalized linear model is the canonical link function.

Canonical Link Functions for Generalized Linear Models

| Distribution | Default link | Link function | Mean (inverse link) |
| --- | --- | --- | --- |
| `'normal'` | `'identity'` | f(μ) = μ | μ = Xb |
| `'binomial'` | `'logit'` | f(μ) = log(μ/(1–μ)) | μ = exp(Xb) / (1 + exp(Xb)) |
| `'poisson'` | `'log'` | f(μ) = log(μ) | μ = exp(Xb) |
| `'gamma'` | `-1` | f(μ) = 1/μ | μ = 1/(Xb) |
| `'inverse gaussian'` | `-2` | f(μ) = 1/μ² | μ = (Xb)^(–1/2) |

### Hat Matrix

The hat matrix H is defined in terms of the data matrix X and a diagonal weight matrix W:

$H = X(X^T W X)^{-1} X^T W^T.$

W has diagonal elements $w_i$:

$w_i = \frac{g'(\mu_i)}{\sqrt{V(\mu_i)}},$

where

• g is the link function mapping $y_i$ to $x_i b$.
• $g'$ is the derivative of the link function g.
• V is the variance function.
• $\mu_i$ is the ith mean.

The diagonal elements $h_{ii}$ satisfy

$0 \le h_{ii} \le 1, \qquad \sum_{i=1}^{n} h_{ii} = p,$

where n is the number of observations (rows of X), and p is the number of coefficients in the regression model.
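For ordinary least squares (identity link with unit weights) the hat matrix reduces to $H = X(X^T X)^{-1} X^T$, and for a simple regression with an intercept its diagonal has a well-known closed form. The following pure-Python sketch (not part of the MATLAB documentation; names are illustrative) checks the two diagonal properties stated above:

```python
# Leverage (hat-matrix diagonal) for simple OLS regression with an intercept:
#   h_ii = 1/n + (x_i - xbar)^2 / sum_j (x_j - xbar)^2
def leverages(x):
    n = len(x)
    xbar = sum(x) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return [1 / n + (xi - xbar) ** 2 / sxx for xi in x]

h = leverages([1, 2, 3, 4, 5])
p = 2  # coefficients: intercept and slope

assert all(0 <= hi <= 1 for hi in h)  # 0 <= h_ii <= 1
assert abs(sum(h) - p) < 1e-12        # sum of leverages equals p
print([round(hi, 2) for hi in h])     # → [0.6, 0.3, 0.2, 0.3, 0.6]
```

Note how the extreme x values receive the largest leverage, matching the interpretation of `Leverage` in the `Diagnostics` table: observations far from the bulk of the data dominate their own predictions.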
### Leverage

The leverage of observation i is the value of the ith diagonal term, $h_{ii}$, of the hat matrix H. Because the sum of the leverage values is p (the number of coefficients in the regression model), an observation i can be considered an outlier if its leverage substantially exceeds p/n, where n is the number of observations.

### Cook's Distance

The Cook's distance $D_i$ of observation i is

$D_i = w_i \frac{e_i^2}{p\hat{\varphi}} \frac{h_{ii}}{(1 - h_{ii})^2},$

where

• $\hat{\varphi}$ is the dispersion parameter (estimated or theoretical).
• $e_i$ is the linear predictor residual, $g(y_i) - x_i\hat{\beta}$, where g is the link function, $y_i$ is the observed response, $x_i$ is the observation, and $\hat{\beta}$ is the estimated coefficient vector.
• p is the number of coefficients in the regression model.
• $h_{ii}$ is the ith diagonal element of the hat matrix H.

### Deviance

The deviance of a model M1 is twice the difference between the log likelihood of that model and the saturated model, MS. The saturated model is the model with the maximum number of parameters that can be estimated. For example, if there are n observations $y_i$, i = 1, 2, ..., n, with potentially different values for $x_i^T\beta$, then you can define a saturated model with n parameters. Let L(b,y) denote the maximum value of the likelihood function for a model. Then the deviance of model M1 is

$-2\left(\log L(b_1, y) - \log L(b_S, y)\right),$

where $b_1$ are the estimated parameters for model M1 and $b_S$ are the estimated parameters for the saturated model. The deviance has a chi-square distribution with n − p degrees of freedom, where n is the number of parameters in the saturated model and p is the number of parameters in model M1.

If M1 and M2 are two different generalized linear models, then the fit of the models can be assessed by comparing the deviances D1 and D2 of these models.
The difference of the deviances is

$D = D_2 - D_1 = -2\left(\log L(b_2, y) - \log L(b_S, y)\right) + 2\left(\log L(b_1, y) - \log L(b_S, y)\right) = -2\left(\log L(b_2, y) - \log L(b_1, y)\right).$

Asymptotically, this difference has a chi-square distribution with degrees of freedom v equal to the number of parameters that are estimated in one model but fixed (typically at 0) in the other. That is, it is equal to the difference in the number of parameters estimated in M1 and M2. You can get the p-value for this test using `1 - chi2cdf(D,V)`, where D = D2 − D1.

## Copy Semantics

Value. To learn how value classes affect copy operations, see Copying Objects in the MATLAB® documentation.

## Examples

### Fit a Generalized Linear Model

Fit a logistic regression model of the probability of smoking as a function of age, weight, and sex, using a two-way interactions model.

Load the `hospital` dataset array.

```
load hospital
ds = hospital; % just to use the ds name
```

Specify the model using a formula that allows up to two-way interactions.

```
modelspec = 'Smoker ~ Age*Weight*Sex - Age:Weight:Sex';
```

Create the generalized linear model.
```
mdl = fitglm(ds,modelspec,'Distribution','binomial')
```

```
mdl = 

Generalized Linear regression model:
    logit(Smoker) ~ 1 + Sex*Age + Sex*Weight + Age*Weight
    Distribution = Binomial

Estimated Coefficients:
                       Estimate          SE         tStat      pValue 
                      ___________    _________    ________    _______

    (Intercept)           -6.0492       19.749     -0.3063    0.75938
    Sex_Male              -2.2859       12.424    -0.18399    0.85402
    Age                   0.11691      0.50977     0.22934    0.81861
    Weight               0.031109      0.15208     0.20455    0.83792
    Sex_Male:Age         0.020734      0.20681     0.10025    0.92014
    Sex_Male:Weight       0.01216     0.053168     0.22871     0.8191
    Age:Weight        -0.00071959    0.0038964    -0.18468    0.85348

100 observations, 93 error degrees of freedom
Dispersion: 1
Chi^2-statistic vs. constant model: 5.07, p-value = 0.535
```

The large p-value indicates the model might not differ statistically from a constant.

### Create a Generalized Linear Model Stepwise

Create response data using just three of 20 predictors, and create a generalized linear model stepwise to see if it uses just the correct predictors.

Create data with 20 predictors, and a Poisson response using just three of the predictors, plus a constant.

```
rng default % for reproducibility
X = randn(100,20);
mu = exp(X(:,[5 10 15])*[.4;.2;.3] + 1);
y = poissrnd(mu);
```

Fit a generalized linear model using the Poisson distribution.

```
mdl = stepwiseglm(X,y,...
    'constant','upper','linear','Distribution','poisson')
```

```
1. Adding x5, Deviance = 134.439, Chi2Stat = 52.24814, PValue = 4.891229e-13
2. Adding x15, Deviance = 106.285, Chi2Stat = 28.15393, PValue = 1.1204e-07
3. Adding x10, Deviance = 95.0207, Chi2Stat = 11.2644, PValue = 0.000790094

mdl = 

Generalized Linear regression model:
    log(y) ~ 1 + x5 + x10 + x15
    Distribution = Poisson

Estimated Coefficients:
                   Estimate       SE        tStat       pValue  
                   ________    ________    ______    __________

    (Intercept)      1.0115    0.064275    15.737    8.4217e-56
    x5              0.39508    0.066665    5.9263    3.0977e-09
    x10             0.18863     0.05534    3.4085     0.0006532
    x15             0.29295    0.053269    5.4995    3.8089e-08

100 observations, 96 error degrees of freedom
Dispersion: 1
Chi^2-statistic vs. constant model: 91.7, p-value = 9.61e-20
```
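For the Poisson family used in this example, the deviance reported at each step has the closed form D = 2 Σ [yᵢ log(yᵢ/μᵢ) − (yᵢ − μᵢ)], with the convention that the log term is 0 when yᵢ = 0. A pure-Python sketch (the data here are made up for illustration, not taken from the example above):

```python
import math

def poisson_deviance(y, mu):
    """Poisson deviance: twice the log-likelihood gap to the saturated model."""
    d = 0.0
    for yi, mi in zip(y, mu):
        if yi > 0:
            d += yi * math.log(yi / mi)  # y*log(y/mu); zero by convention at y = 0
        d -= yi - mi                     # -(y - mu)
    return 2 * d

# A saturated fit (mu equal to y) has zero deviance.
assert poisson_deviance([2, 3], [2, 3]) == 0.0
print(round(poisson_deviance([2, 0, 3], [2, 1, 2]), 4))  # → 2.4328
```

The chi-square statistics in the stepwise trace are differences of such deviances between nested models, as described in the Deviance definition above.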
sporeball numbers

Background

We've studied at least five different types of numbers based on the IDs of different users on this site. Why not study another?

My user ID is $91030_{10}$. Its binary representation is $10110001110010110_2$, a representation which has an interesting property of its own:

• One option for the longest palindromic run of binary digits is $0011100$.
• If you remove this run of digits, the remaining list of digits ($1011010110$) can be split into two identical halves.

I define a sporeball number as any positive integer where at least one of the options for the longest palindromic run of digits in its binary representation can be removed such that the remaining list of digits can be split into two identical halves.

Write a program or function which takes a positive integer as input and determines whether or not it is a sporeball number. Some clarifications to keep in mind:

• Leading zeroes should be ignored.
• Exactly one palindromic run of digits should be removed before checking if the remaining digits can be split into identical halves.
• If one of the options for the longest palindromic run occurs multiple times, remove only one occurrence, rather than all of them, before checking if the remaining digits can be split into identical halves.
• Single digits are palindromic.
• The empty string "" is not palindromic, and cannot be split into two identical halves.

Remember that there may be more than one longest palindromic run of digits:

• The binary digits of $12_{10}$ ($1100$) contain two options for the longest palindromic run ($11$ and $00$). No matter which one is removed, the remaining digits will be able to be split into identical halves. Thus, $12_{10}$ is a sporeball number.
• The binary digits of $20_{10}$ ($10100$) contain two options for the longest palindromic run ($010$ and $101$).
Removing $010$ leaves the digits $10$, which cannot be split into identical halves; however, removing $101$ leaves the digits $00$, which can be. Thus, $20_{10}$ is a sporeball number.

There are 153 sporeball numbers under 1,000:

12 20 23 24 26 28 29 39 48 57 60 68 71 84 87 96 100 106 108 110 111 113 117 123 124 132 135 154 166 178 180 183 192 204 207 210 222 225 237 240 243 252 260 263 277 282 287 295 314 326 334 336 337 340 343 348 351 354 370 372 375 384 392 394 396 399 404 412 418 426 428 431 432 446 449 457 469 476 477 479 480 483 484 490 491 496 497 501 503 508 516 519 533 538 543 562 600 610 612 615 634 646 652 660 663 664 670 673 676 691 700 703 706 718 720 735 742 754 756 759 768 778 780 783 792 804 816 821 826 828 831 834 858 870 874 876 879 894 897 918 921 922 924 927 933 957 960 963 972 978 987 993 999

Rules

• This is code-golf, so the shortest answer in bytes wins.
• Standard I/O rules apply.

Comments:

• Nice name. I'm afraid I'll never have a set of numbers named after me – user, Sep 19 '20 at 3:39
• @JoKing The empty string is not palindromic because it contains no characters to read backwards or forwards. – Sep 19 '20 at 4:24
• "You can use any two distinct values for true and false." - this is not the site default of truthy/falsey (which allows values which the language evaluates as true/false). I don't think there is a good reason to override this default here. – Sep 19 '20 at 15:17
• FWIW the empty string is palindromic, by definition, since it's the same forwards as it is backwards. It will never be the longest present, however (since, as you point out, single digits are palindromic). – Sep 19 '20 at 15:26
• Does "exactly one palindromic run of digits" mean all instances of a single palindrome, or one instance of a palindrome? The least number where this makes a difference is 2405. – att, Sep 21 '20 at 5:08

Brachylog, 26 bytes

    {ḃ~c₃↺{↔?¬Ė}ʰ↻c}ᶠlᵒlᵍh∋~jz

Try it online!
Dreadfully slow for large falsy testcases, but verifies truthy ones surprisingly quickly. My original solution ran into the same false negatives as xash's deleted answer, but fortunately the process of fixing that helped me shave off 2 bytes. { }ᶠ Find every possible result from: ḃ take the binary digits of the input, ~c₃ split them into three (possibly empty) partitions, ↺{ }ʰ↻ for the middle partition: ↔ reversed it is ? itself ¬Ė which is not the empty list; Ė replace it with the empty list. c and re-concatenate the partitions. lᵒ Sort the results by length, lᵍ group them by length, h and take the first group (that with minimal length). ∋ Some element of that group ~j is something concatenated with itself z which is not the empty list. Rather than maximize the length of the palindromic substring, it just minimizes the length of everything left, which is an approach I really only came up with because of my initial approach of relying on ≜. 05AB1E, 47454342404140 31 bytes b©ŒʒÂQ}é.γg}θε®sõ.;D2ä1ìËsgĀ*}à Try it online! -2 thanks to @ovs! -1 thanks to @ovs! -1 (lol) thanks to a bug fix -1 thanks to @ovs (again!) +1 due to challenge clarification :-( but -1 thanks to @Kevin! and another whopping -9 thanks to @Kevin! Don't mind me... just posting another overly long answer in 05AB1E that will probably be was golfed by anyone experienced with 05AB1E. The ÂQ trick to see if a string is a palindrome was taken from this 05AB1E tip answer by Kevin. Explained (old) bDV.œ˜ʒÂQ} ЀgàUʒgXQ}εYsõ:Ðg;ôËsgD0ÊsÈ**}à bDV # Get the binary representation of the input, and assign variable Y to that value while still keeping a copy on the stack .œ # Push all partitions of that binary representation ˜ # Flatten said list and ʒ # Select items where: ÂQ} # They are a palindrome Ð # and push three copies of it to the stack. 
€g # For one of those copies, push the length of each item àU # Find the maximum length and assign it to variable Y ʒgXQ} # From the list of palindromic partitions, select the ones which are of the maximum length ε # And from that list: Ysõ: # Replace the occurrence of that number in variable Y with nothing THEN Ð # Triplicate it THEN g;ô # Split it in half THEN Ë # See if all elements are equal AND sgD0ÊsÈ** # Ensure the length of Y with the item removed isn't 0 and isn't odd }à # Close the map, and take the maximum of the list and implicitly print the result • time to do this in Keg Sep 19 '20 at 5:34 • I think you can replace O>≠ with à (maximum), because the list only contains 1s and 0s at the end. And if you flatten the partitions before checking for palindromes, the map isn't necessary. – ovs Sep 19 '20 at 7:12 • @sporeball fixed Sep 19 '20 at 20:50 • Since : performs infinite replacement, this also fails on an input of 2405. Sep 21 '20 at 6:31 • 31 bytes: DV...Y to ©...®; .œ˜ to Œ; ЀgàUʒgXQ} to é.γg}θ; ËsgƵ2SQË* to 1ìËsgĀ*. Sep 21 '20 at 7:44 J, 8078756661 57 bytes 1 e.#\,@((#<.[-:[:,~,~inv)\.*[:(*i.@#=+./"{i:1:)(-:|.)\)] Try it online! -3 bytes thanks to Marshall -9 bytes thanks to xash Tougher than I thought it would be. Finally a respectable size, though still high for J. J, alternate approach, 73 bytes 1 e.1}.((((<:@[,(-:|.)\#(#<.]-:[:,~,~inv)\.)~{.))^:(0<{.@]*1=#@])^:_#)@#: Try it online! This one uses do..while ^:(while)^:_, starting by searching the longest possible length palindrome, and stopping as soon as it finds any for a certain length, returning the boolean telling you if the complement for that palindrome is a doubled string. • … and [:(*i.@#=1 i:~+./"1) still feels too long. But \ and \. are a nice fit for this challenge! – xash Sep 20 '20 at 3:01 • 66 bytes – xash Sep 20 '20 at 3:35 • @xash, thanks! now 61: tio.run/… Sep 20 '20 at 3:39 • @xash fwiw I spent some time trying to improve *i.@#=+./"{i:1: but wasn't able to. 
Sep 20 '20 at 6:18 Retina, 126 bytes .+ * +^(_*)\1(_?)(?!^|_) $1$.2 Lv$(.)+.?(?<-1>\1)+(?(1)(?!))|.$$' N$ $.& +m^((.)*)¶(?<-2>.)*(?(2)(?!)).+$ $1 0m^(.+)\1$ Try it online! Explanation: .+ * Convert the input to unary. +^(_*)\1(_?)(?!^|_) $1$.2 Convert it to binary. Lv$(.)+.?(?<-1>\1)+(?(1)(?!))|.$$' Find and remove palindromes. N$ $.& Sort the results by length, so that the first result corresponds to the longest palindrome. +m^((.)*)¶(?<-2>.)*(?(2)(?!)).+$ $1 Remove all results of longer length. 0m^(.+)\1$ Check whether any of them can be split. • I've since noticed that taking input directly as a binary string is allowed; this would naturally cut the byte count by 32 if applied here (simply delete the first four lines). – Neil Sep 20 '20 at 17:39 Charcoal, 53 bytes ≔⍘N²θF⊕LθFιF⁼✂θκι¹⮌✂θκι⊞υ⁺…θκ✂θι¿⌊υ⊙υ∧⁼Lι⌊EυLλ⁼ιײ∕ι² Try it online! Link is to verbose version of code. Output is a Charcoal boolean, i.e. - for a sporeball number, empty if not. Explanation: ≔⍘N²θ Convert the input to base 2. F⊕LθFι Loop over all nontrivial substrings of the input. F⁼✂θκι¹⮌✂θκι If this substring equals its reverse... ⊞υ⁺…θκ✂θι ... then push the remaining digits to the predefined empty list. ¿⌊υ If the original number was not palindromic, ... ⊙υ∧⁼Lι⌊EυLλ⁼ιײ∕ι² ... then output whether any of the results are of minimal length and equal themselves halved in length and doubled again. • Taking the input directly as a binary string would naturally save 5 bytes. – Neil Sep 20 '20 at 17:39 Jelly, 31 bytes ḊḢŒḂḤœP⁸F BØ2jŒṖḟ€€2Ç€LÐṂŒHE$ƇẸ Try it online! Or see those up to 600 (up to 1000 is too slow). How? 
BØ2jŒṖḟ€€2Ç€LÐṂŒHE$ƇẸ - Main Link: n B - convert (n) to a binary list Ø2 - [2,2] j - join ([2,2]) with (B(n)) ŒṖ - partitions (none with empty parts, hence the Ø2j and ḟ€€2) ḟ€€2 - remove any 2s from each part of each Ç€ - call Link 1 for each (removes second part if it's palindromic & flattens) LÐṂ - keep only those with minimal length Ƈ - filter keep those for which: $- last two links as a monad: ŒH - split into two E - all equal? Ẹ - any truthy? ḊḢŒḂḤœP⁸F - Link 1: list of parts Ḋ - deueue Ḣ - head -> second part ŒḂ - is palindromic? (1 if so, else 0) Ḥ - double ⁸ - the list of parts œP - partition at index (0œP[4,5,6,7] -> [[4,5,6,7]] while 2œP[4,5,6,7] -> [[4],[6,7]]) F - flatten Wolfram Language (Mathematica), 128...103 101 bytes FreeQ[MinimalBy[$@@d~Drop~#&/@SequencePosition[d=#~IntegerDigits~2,_?PalindromeQ],Length],a__~$~a__]& Try it online! Returns False if the number is not a sporeball number, and True otherwise. d=#~IntegerDigits~2 (* get digits of input, base 2. *) SequencePosition[ % ,_?PalindromeQ] (* get positions of palindromic runs *) d~Drop~#/@ % (* and remove them, *)$@@ % (* placing the remaining digits in $*) MinimalBy[ % ,Length] (* keep the shortest remaining digit lists *) FreeQ[ % ,a__~$~a__] (* and check if they have identical halves. *) $@@ is needed to handle cases like $$\38=100110_2\$$, where removing either of the two longest-palindromes 1001, 0110 has the same result 10. JavaScript (ES6), 175 bytes n=>(m=g=(s,p='',q=p)=>s?g(s.slice(1),p+s[0],q,s==[...s].reverse(L=s.length).join?o=(L<=m?o:!(m=L))|L==m&/^(.+)\1$/.test(p+q):0,g(s.slice(0,-1),p,s[L-1]+q)):o)(n.toString(2)) Try it online! Commented n => ( // n = input m = // initialize m to a non-numeric value g = ( // g is a recursive function taking: s, // s = middle part of the string (the palindromic one) p = '', q = p // p = left part, q = right part ) => // s ? // if s is not empty: g( // outer recursive call: s.slice(1), // with the first character of s removed ... 
p + s[0], // ... and appended to p q, // with q unchanged s == [...s] // split s .reverse( // reverse it L = s.length // set L = length of s (argument ignored by reverse) ).join ? // join again; if s is a palindrome: o = // update o: ( L <= m ? // if L is not higher than m: o // yield o : // else: !(m = L) // update m to L and yield 0 ) | L == m & // bitwise OR with 1 if L = m (current max.) /^(.+)\1$/ // and the concatenation of p and q can be .test(p + q) // split into 2 identical halves : // else: 0, // abort g( // inner recursive call: s.slice(0, -1), // with the last character of s removed p, // with p unchanged s[L - 1] + q // with the last character of s prepended to q ) // end of inner recursive call ) // end of outer recursive call : // else: o // return o )(n.toString(2)) // initial call to g with s = binary string for n Husk, 28 bytes ▲foE½†!ḋ¹ṠM-ö→kLfoS=↔m!ḋ¹Qŀḋ Try it online! Returns an empty list (which is falsy in Husk) or a nonempty list (which is truthy). Explanation The repeated ḋ feels wasteful but I'm not sure how to get rid of it. Input is a number, say n=357 ▲f(E½)†!ḋ¹ṠM-(→kLf(S=↔m!ḋ¹)Q)ŀḋ Parentheses added for clarity. ḋ Binary digits: D=[1,0,1,1,0,0,1,0,1] ŀ Indices: I=[1,2,3,4,5,6,7,8,9] (→kLf(S=↔m!ḋ¹)Q) Get indices of longest palindromic runs. Q Slices: [[1],[2],[1,2],..,[1,2,..,9]] f Filter by condition: (S=↔m!ḋ¹) Is a palindrome in D. m Map ! indexing into ḋ¹ D (recomputed). S= That equals ↔ its reverse. kL Classify (into separate lists) by length. → Get the last one: [[2,3,4,5],[4,5,6,7]] ṠM- Remove each from I: [[1,6,7,8,9],[1,2,3,8,9]] † Deep map !ḋ¹ indexing into D (recomputed again): [[1,0,1,0,1],[1,0,1,0,1]] f Filter by condition: (E½) Splits into identical halves. ½ Split into halves (if length is odd, first part is longer): [[1,0,1],[0,1]] E All elements are equal: 0 Result is [] ▲ Maximum, or [] if the argument is empty: [] The final result is nonempty iff the last filter keeps a nonempty list. 
Pip-p, 62 bytes $Q_^@#_/2FIMN(_FI#*Y{c:y@$,:acQRVc?yRAaxx}M$ALCG1+#YTBa)=#_FIy Outputs an empty list for falsey, non-empty for truthy. Try it online! Oy vey. I'm not going to write up a detailed explanation for that monstrosity. Suffice it to say that half the bytecount (roughly the Y{c:y@$,:acQRVc?yRAaxx}M$ALCG1+# section) is required to find all palindromic substrings and remove them. • I want to try writing an explanation for this. May 1 at 8:16 • @Razetime Hey, go for it! May 1 at 16:17 Python 3.8 (pre-release), 154 152 bytes def f(n):s=f'{n:b}';k=len(s);return max((b-a,(r:=s[:a]+s[b:])[:(h:=k-b+a>>1)]==r[h:]>'')for a in range(k)for b in range(a,k+1)if(p:=s[a:b])==p[::-1])[1] Try it online! Commented: s=f'{n:b}' # convert n to a binary string k=len(s) # and take the length return max( ... )[1] # the second element from the maximum of (b-a, # tuples of palindrome length b-a ... [:(h:=k-b+a>>1)] # ... and is the first half (r:=s[:a]+s[b:]) # of the binary string without the palindrome ==r[h:] # equal to the second half >'') # and not equal to the empty string for a in range(k) # for palindrome starting positions a in [0, 1, ..., k-1] for b in range(a,k+1) # for palindrome end indices b in [1, 2, ..., k-a] if(p:=s[a:b])==p[::-1]) # if this is an actual palindrome If there multiple palindromes of same maximum length, max selects the tuple with the highest second value, where True>False. Factor, 209 197 bytes : s ( n -- ? ) >bin dup all-subseqs [ dup reverse = ] filter dup [ last length ] dip [ length over = ] filter nip [ split1 append [ ""= not ] keep dup length 2/ cut = and ] with [ or ] map-reduce ; Try it online! 
Japt, 33 42 bytes

s2 ã fêS üÊo

Try it

s2       - convert input to binary string
ã        - substrings
fêS      - filter palindrome
üÊo      - take last group by length
Vc@ðXà   - find indexes of each palindrome in input
£jXVÎlà  - map those indexes by removing n(=palindr.length) characters from input at index
®òZÊ/2à  - split all results
d_       - return true if any :
ZÎ¥Zo    - are ==

• Fixed: now works for numbers with more identical longest palindromic runs like 2405 => 100101100101
• Test: computes the first 1000 terms and checks that the results match the test cases.
• fails on 2405 – att Sep 21 '20 at 2:54
• On second thought, the spec is a little vague about that. I'll ask. – att Sep 21 '20 at 5:07
• @att I think you are right; that's why r replaces every occurrence of X, hence 100101100101 becomes 0101, which gives true. I think the OP clearly stated that exactly one run has to be discarded. Thanks for spotting that. Sep 21 '20 at 5:45

Dotty, 201 bytes

s=>((for{j<-1 to s.size
i<-0 to j-1
x=s.slice(i,j)if x==x.reverse}yield(i,j))groupBy(_-_)minBy(_._1)_2)exists{(i,j)=>val x=s.slice(0,i)+s.substring(j)
x!=""&&x.slice(0,x.size/2)==x.substring(x.size/2)}

Try it online (in Scastie)

Input must already be a binary string.

Dotty, 226 bytes

x=>{val s=x.toBinaryString
((for{j<-1 to s.size
i<-0 to j-1
x=s.slice(i,j)if x==x.reverse}yield(i,j))groupBy(_-_)minBy(_._1)_2)exists{(i,j)=>val x=s.slice(0,i)+s.substring(j)
x!=""&&x.slice(0,x.size/2)==x.substring(x.size/2)}}

Try it online (in Scastie)

Input is an Int.
# How to return a variable name and value from within a self-contained function block?

Abstract: I have viewed other solutions on StackExchange but none actually address using a fully self-contained function to return the name and value of a variable. The only function that works requires using SetAttributes[...] outside the function before using it: Display variable name instead of value

Goal: to create a completely self-contained function Block[...] or Module[...] that returns a variable's name and value without requiring an outside function such as SetAttributes[...].

About the Code: Below is the code for four different attempts. The top two are globally scoped and work fine. The bottom two are written into a function block but do not work. The output is decorated with strings to give you an idea of how I intend to use it.

varname = 123;

(* this works *)
Row[{"Global Scope Defer: ", "the name is ", Defer@varname, " and value is -> ", varname}]

(* this works *)
SetAttributes[ShowName, HoldAll];
ShowName[name_] := Row[{"Global Scope SetAttributes HoldForm: ", "the name is ", HoldForm@name, " and value is -> ", ReleaseHold@name}];
ShowName[varname]

(* this doesn't work *)
showDeferFunction[var_] := Block[{pre, arrow},
  pre = "the name is ";
  arrow = " and value is -> ";
  Row[{"Functional Scoping Defer: ", pre, Defer@var, arrow, var}]
  ];
showDeferFunction[varname]

(* this doesn't work either *)
showHoldFormFunction[var_] := Block[{pre, arrow},
  pre = "the name is ";
  arrow = " and value is -> ";
  SetAttributes[ShowName, HoldAll];
  ShowName[name_] := Row[{"Functional Scoping SetAttributes HoldForm: ", pre, HoldForm@name, arrow, ReleaseHold@name}];
  ShowName[var]
  ];
showHoldFormFunction[varname]

The Output:

Global Scope Defer: the name is varname and value is -> 123
Global Scope SetAttributes HoldForm: the name is varname and value is -> 123
Functional Scoping Defer: the name is 123 and value is -> 123
Functional Scoping SetAttributes HoldForm: the name is 123 and value is -> 123

• Something like xxx = 1; Function[Null, Row[{Defer[#1], " = ", #1}], {HoldAll}][xxx]? – J. M.'s ennui May 23 '20 at 7:13
• @J.M. Thank you, that is close, but I am looking for a named function that would look something like below (example doesn't work) so I could re-use it without having to copy the function body each time: fn[var_] := Function[Null, Row[{Defer[#1], " = ", #1}], {HoldAll}][var]; – Jules Manson May 23 '20 at 7:24
• You don't need var; fn = Function[(* stuff *)] suffices. – J. M.'s ennui May 23 '20 at 9:15
What is the reason behind the phenomenon of the Joule-Thomson effect?

For an ideal gas there is no heating or cooling during an adiabatic expansion or contraction, but for real gases, an adiabatic expansion or contraction is generally accompanied by a heating or cooling effect. What is the reason behind such a phenomenon? Is it related to the properties of real gases or is it something else?

1 Answer

In a reversible adiabatic expansion or compression, the temperature of an ideal gas does change. In a Joule-Thomson type of irreversible adiabatic expansion (e.g., in a closed container), the internal energy of the gas does not change. For an ideal gas, its internal energy depends only on its temperature. So, for an irreversible adiabatic expansion of an ideal gas in a closed container, its temperature does not change. But the internal energy of a real gas depends not only on its temperature but also on its specific volume (which increases in an expansion). So, for a real gas, its temperature changes. The Joule-Thomson effect is one measure of the deviation of a gas from ideal gas behavior.

ADDENDUM

This addresses a comment from the OP regarding the effect of specific volume on the internal energy of a real gas. Irrespective of the Joule-Thomson effect, one can show (using a combination of the first and second laws of thermodynamics) that, for a pure real gas, liquid, or solid (or one of constant chemical composition), the variation of specific internal energy with respect to temperature and specific volume is given by: $$dU=C_vdT-\left[P-T\left(\frac{\partial P}{\partial T}\right)_V\right]dV$$ The first term describes the variation with respect to temperature and the second term describes the variation with respect to specific volume. For an ideal gas, the second term is equal to zero.
However, for a real gas, the second term is not equal to zero, and that means that, at constant internal energy (as in the Joule-Thomson effect), the temperature will change when the specific volume changes. This is a direct result of the deviation from ideal gas behavior.

• Could you elaborate on the internal energy dependency of real gases in the Joule-Thomson effect? – J_B892 Mar 21 '18 at 8:18
• See my Addendum. – Chet Miller Mar 21 '18 at 12:15
• It's my understanding that, in a Joule-Thomson expansion, the internal energy can change, and what stays constant is the enthalpy, i.e., U + PV. – theorist Jan 10 at 22:23
• @theorist There are actually two versions of JT. One is the version you referred to involving steady flow through a porous plug or valve. The other version is a closed system containing two chambers separated by a partition. The initial pressures in the two chambers are unequal, and the partition is either totally removed or punctured. In this case, the total internal energy is constant. – Chet Miller Jan 10 at 23:35
• I believe what you were initially describing is typically referred to as a Joule expansion, as distinguished from a Joule-Thomson expansion. At least that's how I've always seen the two distinguished (e.g., www-thphys.physics.ox.ac.uk/people/AlexanderSchekochihin/A1/…) (though that author really shouldn't be putting deltas in front of W or Q). – theorist Jan 11 at 1:27
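As a concrete illustration of the addendum (my addition: the van der Waals equation of state, which does not appear in the answer above), the bracketed term can be evaluated explicitly:

```latex
P = \frac{RT}{V-b} - \frac{a}{V^2}
\quad\Rightarrow\quad
T\left(\frac{\partial P}{\partial T}\right)_V = \frac{RT}{V-b}
\quad\Rightarrow\quad
T\left(\frac{\partial P}{\partial T}\right)_V - P = \frac{a}{V^2}
```

so $dU = C_v\,dT + \frac{a}{V^2}\,dV$. At constant internal energy (dU = 0), an increase in specific volume is then accompanied by a drop in temperature for this model gas: the attractive term a produces exactly the real-gas temperature change the answer describes.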
# Homework Help: Invariant mass problem

1. Feb 16, 2010

### wakko101

We have a collision involving a kaon plus and a proton, initially resulting in the same plus a neutral pion (i.e. Kp to Kp(pi)). The question asks us to calculate the invariant mass of just the outgoing kaon and pion, given the outgoing momenta of the particles, the angle between them and their masses. Do I have to take into account the mass of the proton when I'm calculating this, or can I simply add (E1 + E2)^2 and (p1 + p2)^2 (i.e. using only the energies and momenta of the two relevant particles) according to the invariant mass equation? The value I'm getting now seems too large, in the region of 10 GeV/c^2. Any help/suggestions would be appreciated. Cheers, W.

2. Feb 17, 2010

### diazona

The formula is $$m^2 = E^2 - p^2$$, isn't it? So you should subtract those two quantities, not add them, but otherwise I think it should work.
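For reference, the two-body invariant mass the thread is after is M^2 = (E1 + E2)^2 - |p1 + p2|^2, where the momentum term expands as p1^2 + p2^2 + 2 p1 p2 cos(theta) for opening angle theta. A small sketch (my own, with illustrative numbers; natural units, c = 1):

```python
import math

def invariant_mass(m1, p1, m2, p2, theta):
    """Invariant mass of a two-particle system in natural units (c = 1).
    p1, p2 are momentum magnitudes; theta is the opening angle."""
    e1 = math.hypot(m1, p1)  # E^2 = m^2 + p^2
    e2 = math.hypot(m2, p2)
    p_tot_sq = p1**2 + p2**2 + 2 * p1 * p2 * math.cos(theta)
    return math.sqrt((e1 + e2)**2 - p_tot_sq)

# illustrative numbers in GeV: a kaon (0.494) and a pion (0.135)
m = invariant_mass(0.494, 1.0, 0.135, 0.5, math.radians(30))
```

The result is always at least m1 + m2, and for GeV-scale outgoing momenta it stays around the GeV scale, so a value near 10 GeV/c^2 from GeV-scale inputs suggests something has gone wrong in how the energy and momentum terms were combined.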
# Math Help - Intermediate value in how many points?

1. ## Intermediate value in how many points?

According to the intermediate value theorem, if f is continuous on an interval I, a and b belong to I, and f(a) does not equal f(b), then f takes any value between f(a) and f(b) at a point between a and b. My question is: is it possible that f takes each value at an infinite number of points?

2. I don't understand the question, but maybe looking at the simple graphs of x^2 and x^3 can answer your question. The domain of x^2 is all real numbers but its range is all nonnegatives, while that of x^3 is all real numbers and so is its range.

3. Originally Posted by mazaheri
According to the intermediate value theorem, if f is continuous on an interval I, a and b belong to I, and f(a) does not equal f(b), then f takes any value between f(a) and f(b) at a point between a and b. My question is: is it possible that f takes each value at an infinite number of points?
If a and b are $\pm \infty$, sure. Look at the sine function.

-Dan

4. ## Intermediate value in how many points?

I mean, regarding your example: x^2 takes the value 1 at -1 and 1 (both -1 and 1 belong to (a,b)): 2 points. On the other hand, sin(x) takes the value 1 at pi/2, 5pi/2, 9pi/2: 3 points (all belonging to the supposed (a,b)). And f(x) = x sin(1/x) for x ≠ 0, f(0) = 0, with (a,b) = (-2,2), takes the value 0 at an infinite number of points. My question is: is it possible that every value in the range is taken at infinitely many points, as in the last example?
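The last post's example can be made completely explicit: f(x) = x sin(1/x) vanishes at x = 1/(k*pi) for every positive integer k, and all of those infinitely many points lie in (-2, 2). A quick numerical check of the first thousand of them (my own sketch):

```python
import math

def f(x):
    # the example from the last post: x*sin(1/x), extended by f(0) = 0
    return x * math.sin(1 / x) if x != 0 else 0.0

# f has a zero at x = 1/(k*pi) for every positive integer k,
# and all of these points lie in (0, 2), hence in (-2, 2)
zeros = [1 / (k * math.pi) for k in range(1, 1001)]
assert all(abs(f(x)) < 1e-12 for x in zeros)
assert all(0 < x < 2 for x in zeros)
```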
# Search Results 1. Post ### CS1A September 2019 9(ix a) Yes, the question isn't clear. The examiners wanted the variance of the distribution of X for the various values of theta. Post by: John Lee, Apr 18, 2022 in forum: CS1 2. Post ### Time Series -Chap13: "Autoregressive Model more convenient than Moving Average" I believe, though I could be wrong, that it is more convenient because it depends on past observable values - whereas the MA doesn't have that... Post by: John Lee, Apr 11, 2022 in forum: CS2 3. Post ### PBOR CS1 Truncated notes in the text files Are you referring to the theoretical distribution of the prior p~beta(2,3)? In which case the mean is 2/(2+3) = 0.4 and the variance is exactly... Post by: John Lee, Mar 18, 2022 in forum: CS1 4. Post ### PBOR CS1 Truncated notes in the text files They're given in "text format" as they are .R files which can be run in R, rather than PDFs or word documents which can't. Whilst they're fine in... Post by: John Lee, Mar 17, 2022 in forum: CS1 5. Post ### CS1-09 Hypothesis Testing Post by: John Lee, Mar 11, 2022 in forum: CS1 6. Post ### Q&A Bank Part 4 qstn 4.3 (Continuity Correction) Post by: John Lee, Jan 25, 2022 in forum: CT3 7. Post ### Q&A Bank Part 4 qstn 4.3 (Continuity Correction) So this "towards the mean" is only used in hypothesis tests in Chapter 10. This is because when we calculate the p-value for the binomial/Poisson... Post by: John Lee, Jan 24, 2022 in forum: CT3 8. Post ### CS1 exam reference material during exam Yes. Post by: John Lee, Oct 11, 2021 in forum: CS1 9. Post It's always worth dropping an email to the examinations team at the IFoA. Post by: John Lee, Sep 17, 2021 in forum: General study / exams 10. Post ### Converting from RStudio to word Post by: John Lee, Sep 7, 2021 in forum: CS1 11. Post ### Is copying and pasting from R/Excel to Word allowed? Part of the copy/paste ban is to prevent students using pre-written solutions or, obviously, copying other people's work. 
You still have to be... Post by: John Lee, Aug 30, 2021 in forum: General study / exams 12. Post ### To what extent are we allowed to use R/graphics calculator in the CS1A exam? The examiners have made it very clear that working needs to be shown to receive all the marks. For distributions, you should also only use... Post by: John Lee, Aug 18, 2021 in forum: CS1 13. Post ### Mathematical Equation--CM2 You can - but it's not recommended because of the time it takes to write equations in equation mode. Post by: John Lee, Jul 2, 2021 in forum: General study / exams 14. Post ### "PriceWalking": a failure that happened on the IFoA's watch Personal view: This is hardly a scandal. It's a standard business approach of using a loss leader or just a cheaper price to get new business... Post by: John Lee, Jun 7, 2021 in forum: Off-topic 15. Post ### Question regarding referencing for Qs Option 1 seems sensible. Post by: John Lee, Apr 19, 2021 in forum: General study / exams 16. Post Yes, but don't include your name - markers aren't supposed to know! Post by: John Lee, Apr 8, 2021 in forum: General study / exams 17. Thread ### RStudio v1.4 copy and paste issues It appears the newest version of RStudio (1.4) makes copying and pasting output from the console into Word more difficult. You'll need to use... Thread by: John Lee, Apr 8, 2021, 0 replies, in forum: CS2 18. Thread ### RStudio v1.4 copy and paste issues It appears the newest version of RStudio (1.4) makes copying and pasting output from the console into Word more difficult. You'll need to use... Thread by: John Lee, Apr 8, 2021, 0 replies, in forum: CS1 19. Post ### assessment regulations: copy-paste and modifying You are welcome to address this to the IFoA's examinations team (who are responsible for the exams - rather than ActEd or students on this forum)... Post by: John Lee, Apr 1, 2021 in forum: CS1 20. Post ### Notation Nope. Although, there is a general nesting order of brackets: { [ ( Post by: John Lee, Mar 19, 2021 in forum: CS1 21.
Post ### Changing to a different SP exam SP1 overlaps the syllabus of SP2 so there would be some natural synergy. Post by: John Lee, Jan 4, 2021 in forum: General study / exams 22. Post ### Exam script review and analysis Just compare it with the SP2 solutions which are available for free on the Profession's website:... Post by: John Lee, Jan 4, 2021 in forum: General study / exams 23. Post ### Two population proportion - Binomial case You're using a normal approximation to the binomial and then doing the subtraction of two normal distributions (see p41 of chapter 4). Post by: John Lee, Sep 9, 2020 in forum: CS1 24. Post ### Coefficients of Lower and Upper Tail dependencies (Copulas CS2 Chapter 12) There's only a definition for the positive interdependence as that is what we, as actuaries, are worried about. eg extreme events lead to greater... Post by: John Lee, Sep 7, 2020 in forum: CS2 25. Post ### cs1b chap 7 point estimation, example 7.1 You stored it in children but then did c <- Children$V1, and R is case sensitive. Post by: John Lee, Aug 23, 2020 in forum: CS1 26. Post ### We need further guidance on answering questions on Microsoft Word They have already given us more than 3 months' notice. This allows time for the examiners to rewrite the schedule so there is less maths and then... Post by: John Lee, May 30, 2020 in forum: General study / exams 27. Post ### BPP VLE Practice Exam Post by: John Lee, Apr 14, 2020 in forum: Off-topic 28. Post ### BPP VLE Practice Exam Which subject are you referring to, Zair? Post by: John Lee, Apr 14, 2020 in forum: Off-topic 29. Post ### UK April 2005 exam question 9 part ii-a - CT6 - GLM In part (ii) it tells you that $$\log \mu_i = \alpha$$ for $$i=1,2,...,m$$, so you are just replacing the $$\mu_i$$'s with $$e^{\alpha}$$. Post by: John Lee, Mar 30, 2020 in forum: CS1 30. Post ### Ch 13 question number 13.9 part (iv) Model 2 has 5 parameters. Post by: John Lee, Mar 26, 2020 in forum: CS1 31.
Post ### Coronavirus effect on exams You need to contact the examinations team at the IFoA directly. Post by: John Lee, Mar 12, 2020 in forum: General study / exams 32. Post ### Mean response vs individual response Probably a bit late for an answer now - but it's talking about the predicted value. The mean predicted value is on the regression line (a... Post by: John Lee, Sep 17, 2019 in forum: CS1 33. Post ### Ch 13 scaled deviance and AIC We only do this for members of the exponential family - so Poisson, normal, gamma, exponential and binomial. Post by: John Lee, Sep 3, 2019 in forum: CS1 34. Post ### Ch 12 Practice questions Q12.9 (iv) You subtract the number of new parameters added each time between the models. Post by: John Lee, Sep 3, 2019 in forum: CS1 35. Post ### CH8 error in def monotonic decreasing? I'm a little confused. It is the opposite of monotonic increasing, so I'm not sure how this can be an error... Post by: John Lee, Jun 19, 2019 in forum: CS1 36. Post ### Choosing an SP purely on academic strengths ST6 has some stonking maths (which is why I took it) - similar to CT4 and CT8. Post by: John Lee, Jan 7, 2019 in forum: General study / exams 37. Post ### April 2016 Question 6 Our d3 and d4 are labelled differently. Also we have given answers as % rather than euros. Post by: John Lee, Sep 11, 2018 in forum: CT6 38. Post ### Mathematical Properties of Poisson Process It basically means that when we set up our derivative from first principles and thus divide by h, these terms will disappear and so don't... Post by: John Lee, Sep 11, 2018 in forum: CT3 39. Post ### Revision notes booklet 3 practice qn 5 We want E(SI) = E(N)E(Y). The N is unchanged by the proportional reinsurance - it only affects how much you pay. E(Y) = 0.9E(X). Post by: John Lee, Aug 15, 2018 in forum: CT6 40. Post ### Random variation vs. White noise process Essentially we are removing everything that is "obviously" not stationary to the naked eye.
Post by: John Lee, Jul 23, 2018 in forum: CT6 41. Post ### Interpretation of the constant in AR(P) process. 1. Mu would be there but would be a function of time. 2. Because we like to know the mean. 3. Even if it were a constant then it would still add... Post by: John Lee, Jul 23, 2018 in forum: CT6 42. Post ### Q&A Bank Part 1 Q1.1(ii) Good point. We could have eliminated all the other strategies through domination to achieve the same answer. Post by: John Lee, Jul 23, 2018 in forum: CT6 43. Post ### Chapter 10 Exam Type Question (1-F(x))^n gives [P(X>x)]^n = P(Xmin > x). But we want the CDF of Xmin, hence we want P(Xmin < x), and thus we need the 1- Post by: John Lee, Jun 19, 2018 in forum: CT3 44. Post ### Risk parameter under EBCT model 1 Yes, which is why we always fix the value of theta, e.g. $$E(X|\theta)$$. Post by: John Lee, Jun 19, 2018 in forum: CT6 45. Post ### Urgent help please - conditional probability Personally I would stuff the whole conditional worries and draw a tree diagram. It's much, much easier to work from this. Post by: John Lee, Apr 18, 2018 in forum: CT3 46. Post ### CT6 April 2008 Question 4 iii) Deviance residual is calculated for each individual data value - not for the whole sum. Post by: John Lee, Apr 18, 2018 in forum: CT6
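The identity quoted in the "Chapter 10 Exam Type Question" entry above, P(Xmin < x) = 1 - (1 - F(x))^n, is easy to sanity-check by simulation. A minimal sketch (my own; the uniform distribution, where F(x) = x, is chosen purely for illustration):

```python
import random

random.seed(1)
n, x, trials = 5, 0.3, 200_000

# empirical P(min of n uniforms < x) versus 1 - (1 - F(x))^n with F(x) = x
hits = sum(min(random.random() for _ in range(n)) < x for _ in range(trials))
empirical = hits / trials
theoretical = 1 - (1 - x) ** n
assert abs(empirical - theoretical) < 0.01
```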
# Toric ideal of slice of a polytope?

Given a collection $A:=\{a_1, \ldots ,a_n \}$ of different integer points in $\mathbb{N}^d$, which span an affine hyperplane when viewed in $\mathbb{R}^d$, one can define a toric ideal $I_A$ from a monomial homomorphism: \begin{align} \phi_A\colon k[x_1,\ldots, x_n]&\to k[y_1,\ldots, y_d],\\ x_i&\mapsto y^{a_i}:=\prod_{j=1}^{d}y_j^{a_{i,j}} \end{align} according to $I_A:=\ker \phi_A$ (suppose $k$ is algebraically closed of characteristic zero).

Let $H$ be a hyperplane in $k^d$ such that $conv(A)\cap H$ (when viewed as a polytope in $\mathbb{R}^d$) has integer (or at least rational) vertices, and call $A\cap H$ the resulting point configuration, consisting of the vertices of $conv(A)\cap H$. To rule out known situations, suppose further that $H$ intersects the interior of $conv(A)$.

Question: Carrying out the construction for the toric ideal with the input point configuration $A\cap H$, is there a relation between $I_{A\cap H}$, $I_A$ and $H$? Or are there hypotheses that can lead to a relation between the two ideals (or the corresponding rings)?

-

There may be distinct ways of viewing this topic, but in the way I am familiar with, the monomial homomorphism is defined by $$\phi_{A} : k[x_{1},...,x_{n}] \rightarrow k[t_{1},...,t_{d},t_{1}^{-1},...,t_{d}^{-1}] \\ \phi_{A}(x_{i}) = \mathbf{t}^{a_{i}}:=\prod_{j=1}^{d} t_{j}^{a_{j,i}}, \forall 1 \leq i \leq n.$$ My suggestion is to try using the rational polyhedral cone $$\text{pos}_{\mathbb{Q}}((a_{1},...,a_{n})) = \left\{ \sum_{i=1}^{n} \lambda_{i}a_{i} \; | \; \lambda_{i} \in \mathbb{Q}_{\geq 0}\right\}$$ attached to the toric variety $$V(I_{A}) = \{(u_{1},...,u_{n}) \in k^{n} \; | \; F(u_{1},...,u_{n})=0, \forall F \in I_{A}\}$$ where $I_{A}=\ker(\phi_{A})$ is the toric ideal.
In particular, my intuitive idea is that you construct $I_{A}$ and then pass to the toric variety $V(I_{A})$ attached to $I_{A}$ (which is in your case an affine monomial curve if you choose $a_{1}<\cdots<a_{n}$ as relatively prime positive integers; think of it in this case before you generalize!). Now, constructing the polyhedral cone from the toric variety $V(I_{A})$ will allow you to have some sort of bound on how the points of $A$, and hence of $\text{conv}(A)$, will behave. In particular, I think you will be able to define $A$ and $A \cap H$ as subsets of the polyhedral cone, and that there is an associated height of the cone for which we define the hyperplane $H$ so that $$\text{conv}\left((A \cap H)\cup \bigcup_{i=1}^{n}\chi_{H}(a_{i})\right) \subset \text{pos}_{\mathbb{Q}}((a_{1},...,a_{n}))\big|_{h}$$ where $\chi_{H}: \mathbb{N}^{d} \rightarrow \mathbb{N}^d$ is an indicator function defined in terms of the hyperplane $H$ (and a chosen orientation for a normal) which will equal $a_{i}$ when the point is on the desired side of the hyperplane (bounding the polyhedral cone to a subset with finite metric quantities) and $1$ when the point is on the undesired side of the hyperplane (the unbounded region). I am using the $\big|_{h}$ on the rational polyhedral cone to denote the restriction onto the height $h$ induced by your choice of $H$. Using some of these ideas and intuitions, I recommend that you try to construct the toric varieties and rational polyhedral cones attached to $I_{A}$ and $I_{A \cap H}$ in order to understand their relationship as toric ideals.

Thanks for your answer. Perhaps I've misread it, but as I defined the integer points $A$, they define an arbitrary affine toric variety, rather than a monomial curve as you say. Anyhow, I have started a bounty since I'm more interested in an answer of the sort "under these hypotheses..., the precise relation is this:..." or "it's a mess and there is no visible relation between both ideals".
–  Camilo Sarmiento Jan 31 '14 at 18:24
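A minimal worked instance of the construction in the question may be useful for experimentation (the configuration is my own choice, not from the post): for A = {(1,0), (1,1), (1,2)} the map sends x_i to y_1 y_2^{i-1}, the integer kernel of the matrix with columns a_i is spanned by (1, -2, 1), and I_A is generated by the single binomial x_1 x_3 - x_2^2. A quick check that this binomial really lies in ker(phi_A):

```python
# phi_A for the toy configuration A = {(1,0), (1,1), (1,2)}:
# x_i |-> y1 * y2**(i-1), i.e. the monomial y^(a_i)
def phi(i, y1, y2):
    return y1 * y2 ** (i - 1)

# the kernel vector (1, -2, 1) of [[1,1,1],[0,1,2]] encodes x1*x3 - x2**2
for y1, y2 in [(2, 3), (5, 7), (1, 10)]:
    x1, x2, x3 = (phi(i, y1, y2) for i in (1, 2, 3))
    assert x1 * x3 - x2 ** 2 == 0  # the binomial vanishes on the image
```

Slicing this A with a hyperplane and comparing the generators of the two kernels in the same way is one concrete way to probe the relation the question asks about.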