# What is the square root of 150 in simplified radical form?
$\sqrt{150} = 5 \sqrt{6}$
Since $150 = 25 \cdot 6$,
$\sqrt{150} = \sqrt{25 \cdot 6} = \sqrt{25} \cdot \sqrt{6} = 5 \sqrt{6}$
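As a quick numerical sanity check (a minimal sketch, not part of the original answer):

```python
import math

# 150 = 25 * 6, so sqrt(150) = sqrt(25) * sqrt(6) = 5 * sqrt(6)
assert math.isclose(math.sqrt(150), 5 * math.sqrt(6))
```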
---
# Tag: blackjack
About a year ago I did a series of posts on games associated to the Mathieu sporadic group $M_{12}$, starting with a post on Conway’s puzzle M(13) and continuing with a discussion of mathematical blackjack. The idea at the time was to write a book for a general audience, as discussed at the start of the M(13)-post, ending with a series of new challenging mathematical games. I asked : “What kind of puzzles should we promote for mathematical thinking to have a fighting chance to survive in the near future?”
Now, Scientific American has (no doubt independently) taken up this lead. Their July 2008 issue features the article Rubik’s Cube Inspired Puzzles Demonstrate Math’s “Simple Groups” written by Igor Kriz and Paul Siegel.
By far the nicest thing about this article is that it comes with three online games based on the sporadic simple groups, the Mathieu groups $M_{12}$, $M_{24}$ and the Conway group $.0$.
the M(12) game
The puzzle scrambles to an arbitrary permutation in $M_{12}$, and you need to use the two generators $INVERT=(1,12)(2,11)(3,10)(4,9)(5,8)(6,7)$ and $MERGE=(2,12,7,4,11,6,10,8,9,5,3)$ to return to the starting position.
Here is the help-screen :
They promise the solution by July 27th, but a few-line GAP program cracks the puzzle instantly.
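In the same spirit, here is a hedged sketch in plain Python rather than GAP: a breadth-first search over the group generated by the two moves. The tuples below are just the 0-based encodings of INVERT and MERGE as given above; note that pressing MERGE ten times realises $MERGE^{-1}$, so allowing it as a search move is harmless.

```python
from collections import deque

# 0-based image tuples for the generators named above:
# INVERT = (1,12)(2,11)(3,10)(4,9)(5,8)(6,7), MERGE = (2,12,7,4,11,6,10,8,9,5,3)
INVERT = (11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0)
MERGE = (0, 11, 1, 10, 2, 9, 3, 8, 4, 7, 5, 6)

def compose(p, q):
    """Permutation product p∘q (apply q first, then p)."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

# MERGE^-1 is realised on the gadget by ten MERGE presses
MOVES = {"INVERT": INVERT, "MERGE": MERGE, "MERGE^-1": inverse(MERGE)}

def solve(scramble):
    """Breadth-first search for a shortest move sequence back to the identity."""
    identity = tuple(range(12))
    seen = {scramble: []}
    queue = deque([scramble])
    while queue:
        state = queue.popleft()
        if state == identity:
            return seen[state]          # moves to apply, in order
        for name, move in MOVES.items():
            nxt = compose(move, state)
            if nxt not in seen:
                seen[nxt] = seen[state] + [name]
                queue.append(nxt)
    return None
```

For an arbitrary scramble the search may in the worst case visit all 95040 elements of $M_{12}$, which is still essentially instantaneous on modern hardware.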
the M(24) game
Similar in nature, again using two generators of $M_{24}$. GAP-solution as before.
This time, they offer this help-screen :
the .0 game
Their most original game is based on Conway’s $.0$ (dotto) group. Unfortunately, they offer only a Windows-executable version, so I had to install Bootcamp and struggle a bit with taking screenshots on a MacBook to show you the game’s starting position :
Dotto:
Dotto, our final puzzle, represents the Conway group Co0, published in 1968 by mathematician John H. Conway of Princeton University. Co0 contains the sporadic simple group Co1 and has exactly twice as many members as Co1. Conway is too modest to name Co0 after himself, so he denotes the group “.0” (hence the pronunciation “dotto”).
In Dotto, there are four moves. This puzzle includes the M24 puzzle: look at the yellow/blue row at the bottom. This is, in fact, M24, but the numbers are arranged in a row instead of a circle.

The R move is the “circle rotation to the right”: the column above the number 0 stays put, but the column above the number 1 moves to the column over the number 2, etc., up to the column over the number 23, which moves to the column over the number 1. You may also click on a column number and then on another column number in the bottom row, and the “circle rotation” moving the first column to the second occurs.

The M move is the switch: in each group of 4 columns separated by vertical lines (called tetrads), the “yellow” columns switch and the “blue” columns switch.

The sign change move (S) changes the signs of the first 8 columns (the first two tetrads).

The tetrad move (T) is the most complicated: subtract, in each row, from each tetrad 1/2 times the sum of the numbers in that tetrad. Then, in addition to that, reverse the signs of the columns in the first tetrad.
Strategy hints: Notice that the sum of squares of the numbers in each row doesn’t change. (This sum of squares is 64 in the first row, 32 in every other row.) If you manage to get an “8” in the first row, you have almost reduced the game to M24, except for the signs. To reach the original position, the signs of all numbers on the diagonal must be +. Hint on signs: if the only thing wrong are the signs on the diagonal, and only 8 signs are wrong, those 8 columns can be moved to the first 8 columns by using only the M24 moves (M, R).
MUBs (for Mutually Unbiased Bases) are quite popular at the moment. Kea is running a mini-series Mutual Unbias, as is Carl Brannen. Further, the Perimeter Institute has a good website for its seminars where they offer streaming video (I like their Macromedia Flash format, giving video and slides/blackboard shots simultaneously in distinct windows), including a talk on MUBs (as well as an old talk by Wootters).
So what are MUBs to mathematicians? Recall that a d-state quantum system is just the vector space $\mathbb{C}^d$ equipped with the usual Hermitian inner product $\vec{v}.\vec{w} = \sum_i \overline{v_i} w_i$. An observable $E$ is a choice of orthonormal basis ${ \vec{e_i} }$ consisting of eigenvectors of the self-adjoint matrix $E$. $E$ together with another observable $F$ (with orthonormal basis ${ \vec{f_j} }$) are said to be mutually unbiased if the norms of all inner products $\vec{f_j}.\vec{e_i}$ are equal to $1/\sqrt{d}$. This definition extends to a collection of pairwise mutually unbiased observables. In a d-state quantum system there can be at most d+1 mutually unbiased bases, and such a maximal collection of observables is then called a MUB of the system. Using properties of finite fields it has been shown that MUBs exist whenever d is a prime power. On the other hand, the existence of a MUB for d=6 still seems to be open…
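For instance (a small illustrative check, not from the post itself), the standard basis and the discrete Fourier basis of $\mathbb{C}^d$ are mutually unbiased for any d, since every Fourier basis vector has all coordinates of norm $1/\sqrt{d}$:

```python
import cmath
import math

d = 6  # even for d = 6 a *pair* of unbiased bases exists; the open problem is a full set of 7

# Fourier basis vectors f_j with coordinates w^(jk)/sqrt(d), w a primitive d-th root of unity
w = cmath.exp(2j * cmath.pi / d)
fourier = [[w ** (j * k) / math.sqrt(d) for k in range(d)] for j in range(d)]

# the inner product <e_i, f_j> with a standard basis vector e_i is just the
# i-th coordinate of f_j, so all norms |<e_i, f_j>| equal 1/sqrt(d)
for f in fourier:
    for coord in f:
        assert math.isclose(abs(coord), 1 / math.sqrt(d))
```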
The King’s Problem (( actually a misnomer, it’s more the poor physicist’s problem… )) is the following : a physicist is trapped on an island ruled by a mean king who promises to set her free if she can give him the answer to the following puzzle. The physicist is asked to prepare a d-state quantum system in any state of her choosing and give it to the king, who measures one of several mutually unbiased observables on it. Following this, the physicist is allowed to make a control measurement on the system, as well as on any other systems it may have been coupled to in the preparation phase. The king then reveals which observable he measured, and the physicist is required to predict correctly all the eigenvalues he found.
The solution to the King’s problem in prime power dimension by P. K. Aravind, say for $d=p^k$, consists in taking a system of k object qupits (when $p=2l+1$, one qupit is a spin-$l$ particle) which she will give to the King, together with k ancilla qupits that she retains in her possession. These 2k qupits are diligently entangled and prepared in a well-chosen state. The final step in finding a suitable state is the solution to a purely combinatorial problem :
She must use the numbers 1 to d to form $d^2$ ordered sets of d+1 numbers each, with repetitions of numbers within a set allowed, such that any two sets have exactly one identical number in the same place in both. Here’s an example of 16 such strings for d=4 :
11432, 12341, 13214, 14123, 21324, 22413, 23142, 24231, 31243, 32134, 33421, 34312, 41111, 42222, 43333, 44444
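A quick brute-force check (an illustrative sketch, not part of the original text) confirms that the sixteen strings listed above agree in exactly one place for every pair:

```python
from itertools import combinations

strings = ["11432", "12341", "13214", "14123", "21324", "22413", "23142", "24231",
           "31243", "32134", "33421", "34312", "41111", "42222", "43333", "44444"]

# every pair of distinct strings must coincide in exactly one position
for s, t in combinations(strings, 2):
    assert sum(a == b for a, b in zip(s, t)) == 1
```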
Here again, finite fields are used in the solution. When $d=p^k$, identify the elements of $\mathbb{F}_{p^k}$ with the numbers from 1 to d in some fixed way. Then, the $d^2$ number-strings are found as follows : let $k_0,k_1 \in \mathbb{F}_{p^k}$ and take as the first 2 numbers the ones corresponding to these field elements. The remaining d-1 numbers in the string are those corresponding to the field elements $k_m$ (with $2 \leq m \leq d$) determined from $k_0,k_1$ by the equation
$k_m = l_m k_0 + k_1$
where $l_i$ is the field element corresponding to the integer i ($l_1$ corresponds to the zero element). It is easy to see that these $d^2$ strings satisfy the conditions of the combinatorial problem. Indeed, any two digits of a string determine $k_0,k_1$ (and hence the whole string), as it follows from
$k_m = l_m k_0 + k_1$ and $k_r = l_r k_0 + k_1$ that $k_0 = \frac{k_m-k_r}{l_m-l_r}$.
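For a prime d the construction can be carried out directly (a sketch for $d=3$, working with the field elements $0,1,2$ of $\mathbb{F}_3$ as the symbols, and choosing the labelling $l_m = m-1$ so that $l_1 = 0$ as required; this labelling is one admissible choice, not forced by the text):

```python
from itertools import combinations

d = 3  # a prime, so the field is just the integers mod d

def king_string(k0, k1):
    # first two entries are k0 and k1; the entry for m = 2..d is
    # k_m = l_m * k0 + k1 with l_m = m - 1 (so l_1 = 0, the zero element)
    return tuple([k0, k1] + [((m - 1) * k0 + k1) % d for m in range(2, d + 1)])

strings = [king_string(k0, k1) for k0 in range(d) for k1 in range(d)]

# the d^2 strings of length d+1 agree pairwise in exactly one place
for s, t in combinations(strings, 2):
    assert sum(a == b for a, b in zip(s, t)) == 1
```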
In the special case when d=3 (that is, one spin 1 particle is given to the King), we recover the tetracode : the nine codewords
0000, 0+++, 0---, +0+-, ++-0, +-0+, -0-+, -+0-, --+0
encode the strings (with +=1,-=2,0=3)
3333, 3111, 3222, 1312, 1123, 1231, 2321, 2132, 2213
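This correspondence can be verified mechanically (a sketch; I use the fact, consistent with the nine words above, that the tetracode consists of the words $(a,\, b,\, a+b,\, 2a+b)$ over $\mathbb{F}_3$, with the digits 0, +, − standing for 0, 1, 2):

```python
digit = "0+-"  # field elements 0, 1, 2 written as the digits 0, +, -

# the nine tetracodewords (a, b, a+b, 2a+b) over F_3
tetracode = {"".join(digit[x] for x in (a, b, (a + b) % 3, (2 * a + b) % 3))
             for a in range(3) for b in range(3)}
assert tetracode == {"0000", "0+++", "0---", "+0+-", "++-0",
                     "+-0+", "-0-+", "-+0-", "--+0"}

# re-encoding + -> 1, - -> 2, 0 -> 3 gives exactly the nine King's-problem strings
number = {"+": "1", "-": "2", "0": "3"}
strings = {"".join(number[c] for c in w) for w in tetracode}
assert strings == {"3333", "3111", "3222", "1312", "1123",
                   "1231", "2321", "2132", "2213"}
```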
Conway’s puzzle M(13) involves the 13 points and 13 lines of $\mathbb{P}^2(\mathbb{F}_3)$. On all but one point, numbered counters are placed holding the numbers 1,…,12, and a move involves interchanging one counter and the ‘hole’ (the unique point having no counter) and interchanging the counters on the two other points of the line determined by the first two points. In the picture on the left, the lines are represented by dashes around the circle in between two counters, and the points lying on this line are those that connect to the dash either via a direct line or directly via the circle. In the first part we saw that the group of all reachable positions in Conway’s M(13) puzzle having the hole at the top position contains the sporadic simple Mathieu group $M_{12}$ as a subgroup. To see the reverse inclusion we have to recall the definition of the ternary Golay code, named in honour of the Swiss engineer Marcel Golay who discovered in 1949 the binary Golay code that we will encounter _later on_.
The ternary Golay code $\mathcal{C}_{12}$ is a six-dimensional subspace of $\mathbb{F}_3^{\oplus 12}$ and is spanned by its codewords of weight six (six being the minimal Hamming distance of $\mathcal{C}_{12}$, whence it is a two-error-correcting code). There are $264 = 2 \times 132$ weight-six codewords, and they can be obtained from the 132 hexads, which we encountered before as the winning positions of Mathieu’s blackjack, by replacing the stars by signs + or - using the following rules. By a tet (from tetracodeword) we mean a 3×4 array having 4 +-signs indicating the row-positions of a tetracodeword. For example
$~\begin{array}{|c|ccc|} \hline & + & & \\ + & & + & \\ & & & + \\ \hline + & 0 & + & - \end{array}$ is the tet corresponding to the tetracodeword in its bottom row. A col is an array having +-signs along one of the four columns, for example $\begin{array}{|c|ccc|} \hline & + & & \\ & + & & \\ & + & & \\ \hline & & & \end{array}$. The signed hexads will now be the hexads that can be written as $\mathbb{F}_3$-vectors as (depending on the column-distribution of the stars in the hexad, indicated between brackets)
$\text{col}-\text{col}~(3^2 0^2) \qquad \pm(\text{col}+\text{tet})~(3 1^3) \qquad \text{tet}-\text{tet}~(2^3 0) \qquad \pm(\text{col}+\text{col}-\text{tet})~(2^2 1^2)$
For example, the hexad on the right has column-distribution $2^3 0$, so its signed versions are of the form tet-tet. The two tetracodewords must have the same digit (−) at place four (so that these entries cancel and leave an empty column). It is then easy to determine these two tetracodewords giving the signed hexad (together with its negative, obtained by reversing the order of the two codewords)
$\begin{array}{|c|ccc|} \hline \ast & \ast & & \\ \ast & & \ast & \\ & \ast & \ast & \\ \hline - & + & 0 & - \end{array}$ signed as
$\begin{array}{|c|ccc|} \hline + & & & \\ & & & \\ & + & + & + \\ \hline 0 & - & - & - \end{array} - \begin{array}{|c|ccc|} \hline & + & & \\ + & & + & \\ & & & + \\ \hline + & 0 & + & - \end{array} = \begin{array}{|c|ccc|} \hline + & - & & \\ - & & - & \\ & + & + & \\ \hline - & + & 0 & - \end{array}$
and similarly for the other cases. As Conway & Sloane remark, ‘This is one of many cases when the process is easier performed than described’.
We have an order-two operation mapping a signed hexad to its negative, and as these codewords span the Golay code, this determines an order-two automorphism of $\mathcal{C}_{12}$. Further, forgetting about signs, we get the Steiner system S(5,6,12) of hexads, for which the automorphism group is $M_{12}$. Hence the automorphism group of the ternary Golay code is $2.M_{12}$, the unique nonsplit central extension of $M_{12}$.
Right, but what is the connection between the Golay code and Conway’s M(13)-puzzle, which is played with points and lines in the projective plane $\mathbb{P}^2(\mathbb{F}_3)$? There are 13 points, forming a set $\mathcal{P}$, so let us consider a 13-dimensional vector space $X=\mathbb{F}_3^{\oplus 13}$ with basis $x_p~:~p \in \mathcal{P}$. That is, a vector in $X$ is of the form $\vec{v}=\sum_p v_p x_p$, and we consider the ‘usual’ scalar product $\vec{v}.\vec{w} = \sum_p v_p w_p$ on $X$. Next, we bring in the lines of $\mathbb{P}^2(\mathbb{F}_3)$.
For each of the 13 lines $l$ consider the vector $\vec{l} = \sum_{p \in l} x_p$ with support the four points lying on $l$, and let $\mathcal{C}$ be the subspace (code) of $X$ spanned by the thirteen vectors $\vec{l}$. Vectors $\vec{c},\vec{d} \in \mathcal{C}$ satisfy the remarkable identity $\vec{c}.\vec{d} = (\sum_p c_p)(\sum_p d_p)$. Indeed, both sides are bilinear in $\vec{c},\vec{d}$, so it suffices to check the identity for two line-vectors $\vec{l},\vec{m}$. The right-hand side is then $4 \cdot 4 = 16 = 1 \bmod 3$, which equals the left-hand side, as two lines either intersect in exactly one point or are equal (and hence have 4 points in common), and both 1 and 4 equal $1 \bmod 3$. The identity applied to $\vec{c}=\vec{d}$ gives us (note that the squares in $\mathbb{F}_3$ are $\{0,1\}$) information about the weight (that is, the number of non-zero digits) of codewords in $\mathcal{C}$
$\operatorname{wt}(\vec{c}) \bmod 3 = \sum_p c_p^2 = (\sum_p c_p)^2 \in \{ 0,1 \}$
Let $\mathcal{C}'$ be the collection of $\vec{c} \in \mathcal{C}$ of weight zero (modulo 3); then one can verify that $\mathcal{C}'$ is the orthogonal complement of $\mathcal{C}$ with respect to the scalar product, and that the dimension of $\mathcal{C}$ is seven whereas that of $\mathcal{C}'$ is six.
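These claims are easy to check by computer (a sketch, assuming nothing beyond the definitions above: the points of $\mathbb{P}^2(\mathbb{F}_3)$ are the 1-dimensional subspaces of $\mathbb{F}_3^3$, lines are the same 13 objects by duality, and incidence is a zero dot product):

```python
# points of P^2(F_3): nonzero triples over F_3, normalized so the first
# nonzero coordinate is 1; by duality the lines are the same 13 triples
def normalize(v):
    lead = next(x for x in v if x)   # first nonzero coordinate (1 or 2)
    inv = 1 if lead == 1 else 2      # its inverse mod 3
    return tuple((inv * x) % 3 for x in v)

triples = [(a, b, c) for a in range(3) for b in range(3) for c in range(3)][1:]
points = sorted({normalize(v) for v in triples})
assert len(points) == 13

# the 13 line-vectors: characteristic vectors of the 4 points on each line
line_vecs = [[1 if sum(p[i] * l[i] for i in range(3)) % 3 == 0 else 0
              for p in points] for l in points]
assert all(sum(v) == 4 for v in line_vecs)   # every line holds 4 points

# the identity c.d = (sum c)(sum d) mod 3 for the spanning line-vectors
for u in line_vecs:
    for v in line_vecs:
        assert sum(a * b for a, b in zip(u, v)) % 3 == (sum(u) * sum(v)) % 3

def rank_mod3(rows):
    """Rank over F_3 by Gaussian elimination."""
    rows, r = [row[:] for row in rows], 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] % 3), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = 1 if rows[r][c] % 3 == 1 else 2
        rows[r] = [(inv * x) % 3 for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] % 3:
                f = rows[i][c] % 3
                rows[i] = [(x - f * y) % 3 for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

assert rank_mod3(line_vecs) == 7   # dim C = 7, so dim C' = 13 - 7 = 6
```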
Now, for a point $p$, let $\mathcal{G}_p$ be the restriction of
$\mathcal{C}_p = \{ c \in \mathcal{C}~|~c_p = - \sum_{q \in \mathcal{P}} c_q \}$
to the coordinates of $\mathcal{P} - \{ p \}$; then $\mathcal{G}_p$ is clearly a six-dimensional code in a 12-dimensional space. A bit more work shows that $\mathcal{G}_p$ is a self-dual code with minimal weight greater than or equal to six, whence it must be the ternary Golay code! Now we are nearly done. _Next time_ we will introduce a reversi-version of M(13) and use the above facts to deduce that the basic group of the Mathieu groupoid is indeed the sporadic simple group $M_{12}$.
References
Robert L. Griess, “Twelve Sporadic Groups”, chapter 7, ‘The ternary Golay code and $2.M_{12}$’
John H. Conway and N. J. A. Sloane, “Sphere Packings, Lattices and Groups”, chapter 11, ‘The Golay codes and the Mathieu groups’
John H. Conway, Noam D. Elkies and Jeremy L. Martin, ‘The Mathieu group $M_{12}$ and its pseudogroup extension $M_{13}$’, arXiv:math.GR/0508630
If you only tune in now, you might want to have a look at the definition of Mathieu’s blackjack and the first part of the proof of the Conway-Ryba winning strategy involving the Steiner system S(5,6,12) and the Mathieu sporadic group $M_{12}$.
We’re trying to disprove the existence of misfits, that is, of non-hexad positions having a total value of at least 21 such that every move to a hexad would increase the total value. So far, we have succeeded in showing that such a misfit must have the pattern
$\begin{array}{|c|ccc|} \hline 6 & III & \ast & 9 \\ 5 & II & 7 & . \\ IV & I & 8 & . \\ \hline & & & \end{array}$
That is, a misfit must contain the 0-card (queen), cannot contain the 10 or 11 (jack), and must contain 3 of the four Romans. Now we will see that a misfit also contains precisely one of {5,6} (and consequently also exactly one card from {7,8,9}). To start, it is clear that it cannot contain BOTH 5 and 6 (for then its total value would be at most 20). So we have to disprove that a misfit can miss {5,6} entirely (in which case the two remaining cards (apart from the zero and the three Romans) must all belong to {7,8,9}).
Let’s assume the misfit misses 5 and 6 and does not contain 9. Then it must contain 4 (otherwise its column-distribution would be (0,3,3,0) and it would be a hexad). There are just three such positions possible
$\begin{array}{|c|ccc|} \hline . & \ast & \ast & . \\ . & \ast & \ast & . \\ \ast & . & \ast & . \\ \hline - & - & ? & ? \end{array}$ $\begin{array}{|c|ccc|} \hline . & \ast & \ast & . \\ . & . & \ast & . \\ \ast & \ast & \ast & . \\ \hline - & + & ? & ? \end{array}$ $\begin{array}{|c|ccc|} \hline . & . & \ast & . \\ . & \ast & \ast & . \\ \ast & \ast & \ast & . \\ \hline - & 0 & ? & ? \end{array}$
Neither of these can be misfits, though. In the first one, there is an 8->5 move to a hexad of smaller total value (in the second a 7->5 move and in the third a 7->6 move). Right, so the 9-card must belong to a misfit. Assume it does not contain the 4-card; then part of the misfit looks like (with either a 7- or an 8-card added)
$\begin{array}{|c|ccc|} \hline . & \ast & \ast & \ast \\ . & \ast & ? & . \\ . & \ast & ? & . \\ \hline & & & \end{array}$ contained in the unique hexad $\begin{array}{|c|ccc|} \hline \ast & \ast & \ast & \ast \\ . & \ast & & . \\ . & \ast & & . \\ \hline & & & \end{array}$
Either way, the moves 7->6 or 8->6 decrease the total value, so it cannot be a misfit. Therefore, a misfit must contain both the 4- and the 9-card. So it is of the form on the left below
$\begin{array}{|c|ccc|} \hline . & ? & \ast & \ast \\ . & ? & ? & . \\ \ast & ? & ? & . \\ \hline & & & \end{array}$ $\begin{array}{|c|ccc|} \hline . & . & \ast & . \\ . & \ast & \ast & \ast \\ \ast & \ast & . & . \\ \hline - & 0 & - & + \end{array}$ $\begin{array}{|c|ccc|} \hline . & . & \ast & \ast \\ . & \ast & \ast & . \\ \ast & \ast & . & . \\ \hline & & & \end{array}$
If this is a genuine misfit, only the move 9->10 to a hexad is possible (the move 9->11 is not possible, as all BUT ONE of {0,1,2,3,4} are contained in the misfit). Now, the only hexad containing 0, 4, 10 and two cards from {1,2,3} is the one in the middle, giving us what the misfit must look like before the move, on the right. Finally, this cannot be a misfit, as the move 7->5 decreases the total value.
That is, we have proved the claim that a misfit must contain one of {5,6} and one of {7,8,9}. Right, now we can deliver the elegant finishing line of the Kahane-Ryba proof. A misfit must contain 0 and three cards among {1,2,3,4} (call the missing card s), one card $5+\epsilon$ with $0 \leq \epsilon \leq 1$, and one card $7+\delta$ with $0 \leq \delta \leq 2$. Then the total value of the misfit is
$~(0+1+2+3+4-s)+(5+\epsilon)+(7+\delta)=21+(1+\delta+\epsilon-s)$
So, if this value is strictly greater than 21 (and we will see in a moment that it has to be, if it is at least 21), then we deduce that $s < 1 + \delta + \epsilon \leq 4$. Therefore $1+\delta+\epsilon$ belongs to the misfit. But then the move $1+\delta+\epsilon \rightarrow s$ takes the misfit to a 6-tuple with total value 21, which (as we will see in a moment) must then be a hexad, and hence this is a decreasing move! So, finally, there are no misfits!
Hence, from every non-hexad pile of total value at least 21 we have a legal move to a hexad. Because the other player cannot move from an hexad to another hexad (two distinct hexads share at most four cards by the Steiner-property, while a move changes only one card), we are done with our strategy provided we can show (a) that the total value of any hexad is at least 21 and (b) that ALL 6-piles of total value 21 are hexads. As there are only 132 hexads it is easy enough to compute their sum-distribution. Here it is
That is, (a) is proved by inspection, and we see that there are 11 hexads of sum 21 (the light hexads in Conway-speak); as there are only 11 ways to write 21 as a sum of 6 distinct numbers from {0,1,…,11}, (b) follows. By the way, the obvious symmetry of the sum-distribution is another consequence of the duality $t \mapsto 11-t$ discussed briefly at the end of part 2.
Clearly, I’d rather have conceptual proofs for all these facts and briefly tried my hand. Luckily I did spot the following phrase on page 326 of Conway-Sloane (discussing the above distribution) :
“It will not be easy to explain all the above observations. They are certainly connected with hyperbolic geometry and with the ‘hole’ structure of the Leech lattice.”
So, I’d better leave it at this…
References
Joseph Kahane and Alexander J. Ryba, “The hexad game”
John H. Conway and N. J. A. Sloane, “Sphere Packings, Lattices and Groups”, chapter 11, ‘The Golay codes and the Mathieu groups’
(continued from part one). Take twelve cards and give them values 0,1,2,…,11 (for example, take the jack to have value 11 and the queen to have value 0). The hexads are 6-tuples of cards having the following properties. Star their values in the scheme on the left below and write below each column : a 0 if the column has a unique star in the first row, or two stars in rows two and three; a + if the unique star is in the second row, or there are two stars in rows one and three; a − if the unique star is in the third row, or there are two stars in rows one and two; and a ? if the column has 3 or 0 stars. We must obtain a tetracodeword, where we are allowed to replace a ? by any digit. Moreover, we want that the stars are NOT distributed over the four columns such that each of the possible counts 0,1,2,3 appears exactly once. For example, the card-pile { queen, 3, 4, 7, 9, jack } is an hexad, as indicated on the right below, and has column-distribution (1,1,2,2).
$\begin{array}{|c|ccc|} \hline 6 & 3 & 0 & 9 \\ 5 & 2 & 7 & 10 \\ 4 & 1 & 8 & 11 \\ \hline & & & \end{array}$ $\begin{array}{|c|ccc|} \hline & \ast & \ast & \ast \\ & & \ast & \\ \ast & & & \ast \\ \hline - & 0 & - & + \end{array}$
The hexads form a Steiner-system S(5,6,12), meaning that every 5-pile of cards is part of a unique hexad. The permutations on these twelve cards, having the property that they send every hexad to another hexad, form the sporadic simple group $M_{12}$, the _Mathieu group_ of order 95040. For now, we assume these facts and deduce from them the Conway-Ryba winning strategy for Mathieu’s blackjack : the hexads are exactly the winning positions and from a non-hexad pile of total value at least 21 there is always a legal (that is, total value decreasing) move to an hexad by replacing one card in the pile by a card from the complement.
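All of these assertions can be checked by brute force (a sketch in Python, assuming only the value scheme and the odd-man-out rules just described; the tetracode is hard-coded from the earlier post):

```python
from itertools import combinations

# the value scheme: rows one, two, three carry the digits 0, +, -
LAYOUT = [[6, 3, 0, 9],
          [5, 2, 7, 10],
          [4, 1, 8, 11]]
POS = {LAYOUT[r][c]: (r, c) for r in range(3) for c in range(4)}
ROW_DIGIT = "0+-"
TETRACODE = {"0000", "0+++", "0---", "+0+-", "++-0", "+-0+", "-0-+", "-+0-", "--+0"}

def is_hexad(cards):
    rows_per_col = [[r for (r, c) in map(POS.get, cards) if c == col]
                    for col in range(4)]
    if sorted(len(rows) for rows in rows_per_col) == [0, 1, 2, 3]:
        return False                                 # forbidden column distribution
    word = []
    for rows in rows_per_col:
        if len(rows) == 1:
            word.append(ROW_DIGIT[rows[0]])          # row of the unique star
        elif len(rows) == 2:
            word.append(ROW_DIGIT[3 - sum(rows)])    # row without a star
        else:
            word.append("?")                         # 0 or 3 stars: wildcard
    return any(all(w in ("?", t) for w, t in zip(word, tw)) for tw in TETRACODE)

hexads = [set(h) for h in combinations(range(12), 6) if is_hexad(h)]
assert len(hexads) == 132
assert {0, 3, 4, 7, 9, 11} in hexads                 # the worked example above

# Steiner property S(5,6,12): every 5-pile lies in exactly one hexad
assert all(sum(set(f) <= h for h in hexads) == 1
           for f in combinations(range(12), 5))

# (a) every hexad has total value at least 21; exactly 11 'light' hexads sum to 21
assert min(sum(h) for h in hexads) == 21
assert sum(1 for h in hexads if sum(h) == 21) == 11

# no misfits: every non-hexad 6-pile of value >= 21 admits a value-decreasing
# one-card swap landing on a hexad
for pile in combinations(range(12), 6):
    pile = set(pile)
    if sum(pile) < 21 or pile in hexads:
        continue
    assert any(len(pile & h) == 5 and sum(h) < sum(pile) for h in hexads)
```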
It seems that the first proof of this strategy consisted in calculating the Grundy values of all 905 legal positions in Mathieu’s blackjack. Later, Joseph Kahane and Alex Ryba gave a more conceptual proof, which we will try to understand.
Take a non-hexad 6-pile such that the total value of its cards is at least 21. Removing any one of the six cards gives a 5-pile which, by the Steiner property, is contained in a unique hexad. Hence we get 6 different hexads by replacing one card from the non-hexad pile by a card not contained in it. We claim that at least one of these operations is a legal move, meaning that the total value of the cards decreases. Let us call a counterexample a misfit and record some of its properties until we can prove its non-existence.
A misfit is a non-hexad with total value at least 21 such that all 6 hexads, obtained from it by replacing one card by a card from its complement, have greater total value.
A misfit must contain the queen-card. If not, we could get an hexad by replacing one misfit-card (value > 0) by the queen (value zero), so this would be a legal move. Further, the misfit cannot contain the jack-card, for otherwise replacing it by a lower-valued card to obtain an hexad is a legal move.
A misfit contains at least three cards from {queen,1,2,3,4}. If not, three of these cards are the replacements of misfit-cards to get an hexad, but then at least one of the replaced cards has a greater value than the replacement, giving a legal move to an hexad.
A misfit contains more than three cards from {queen=0, 1,2,3,4}. Assume there are precisely three, $\{ c_1,c_2,c_3 \}$, from this set; then the complement of the misfit in the hexad {queen,1,2,3,4,jack} consists of three elements $\{ d_1,d_2,d_3 \}$ (a misfit cannot contain the jack). The two leftmost columns of the value-scheme (left above) form the hexad {1,2,3,4,5,6}, and because the Mathieu group acts 5-transitively there is an element of $M_{12}$ taking $\{ 0,1,2,3,4,11 \} \rightarrow \{ 1,2,3,4,5,6 \}$, and we may even assume that it takes $\{ c_1,c_2,c_3 \} \rightarrow \{ 4,5,6 \}$. But then, in the new value-scheme (determined by that $M_{12}$-element) the two leftmost columns of the misfit look like
$\begin{array}{|c|ccc|} \hline \ast & . & ? & ? \\ \ast & . & ? & ? \\ \ast & . & ? & ? \\ \hline ? & ? & & \end{array}$
and the column-distribution of the misfit must be either (3,0,2,1) or (3,0,1,2) (it cannot be (3,0,3,0) or (3,0,0,3) otherwise the (image of the) misfit would be an hexad). Let {i,j} be the two misfit-values in the 2-starred column. Replacing either of them to get an hexad must have the replacement lying in the second column (in order to get a valid column distribution (3,1,1,1)). Now, the second column consists of two small values (from {0,1,2,3,4}) and the large jack-value (11). So, at least one of {i,j} is replaced by a smaller valued card to get an hexad, which cannot happen by the misfit-property.
Now, if the misfit shares four cards with {queen,1,2,3,4} then it cannot contain the 10-card. Otherwise, the replacement of the 10-card to get an hexad must be the 11-card (by the misfit-property), but then there would be another hexad containing five cards from the hexad {queen,1,2,3,4,jack}, which cannot happen by the Steiner-property. Right, let’s summarize what we know so far about our misfit. Its value-scheme looks like
$\begin{array}{|c|ccc|} \hline 6 & III & \ast & 9 \\ 5 & II & 7 & . \\ IV & I & 8 & . \\ \hline & & & \end{array}$ and it must contain three of the four Romans. At this point Kahane and Ryba claim that the two remaining cards (apart from the queen and the three Romans) must be such that there is exactly one from {5,6} and exactly one from {7,8,9}. They argue this follows from duality, where the dual pile of a card-pile $\{ x_1,x_2,\ldots,x_6 \}$ is the pile $\{ 11-x_1,11-x_2,\ldots,11-x_6 \}$. This duality acts on the hexads as the permutation $~(0,11)(1,10)(2,9)(3,8)(4,7)(5,6) \in M_{12}$. Still, it is unclear to me how they deduce the above claim from it (lines 13-15 of page 4 of their paper). I’d better have some coffee and work around this (to be continued…)
If you want to play around a bit with hexads and the blackjack game, you’d better first download SAGE (if you haven’t done so already) and then get David Joyner’s hexad.sage file and put it in a folder under your sage installation (David suggests ‘spam’ himself…). You can load the routines into sage by typing from the sage-prompt attach ‘spam/hexad.sage’. Now, you can find the hexad from a 5-pile via the command find_hexad([a1,a2,a3,a4,a5],minimog_shuffle) and you can get the winning move for a blackjack-position via blackjack_move([a1,a2,a3,a4,a5,a6],minimog_shuffle). More details are in the Joyner-Casey(Luers) paper referenced last time.
Reference
Joseph Kahane and Alexander J. Ryba, ‘The hexad game’
---
# A sample of gas is collected over water at a temperature of 35.0°C when the barometric pressure reading is 742.0 torr. What is the partial pressure of the dry gas?
This site tells me that the vapour pressure of water at $35.0$ °C is $42.2$ mm Hg.
${P}_{\text{collected}} = {P}_{\text{gas}} + {P}_{\text{SVP}}$, where ${P}_{\text{SVP}}$ is the saturated vapour pressure given above. This should have been quoted with the question.
Thus ${P}_{\text{gas}} = (742.0 - 42.2)$ mm Hg $\cong 700$ mm Hg.
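The arithmetic, as a one-line check (values as above):

```python
# Dalton's law: barometric pressure = dry gas pressure + saturated water vapour pressure
p_total = 742.0   # torr, barometric reading
p_water = 42.2    # torr (= mm Hg), SVP of water at 35.0 deg C
p_gas = p_total - p_water
assert abs(p_gas - 699.8) < 1e-9   # approximately 700 mm Hg
```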
---
ORBi (University of Liège repository) — search results for author Michaël Gillon (first entries of 299):

Gillon, Michaël. “Exoplanetary Transits”. Scientific conference (2017, September 15).

von Boetticher, Alexander; Triaud, Amaury H. M. J.; Queloz, Didier et al. “The EBLM project. III. A Saturn-size low-mass star at the hydrogen-burning limit”, Astronomy and Astrophysics (2017), 604. — We report the discovery of an eclipsing binary system with mass-ratio q ≈ 0.07. After identifying a periodic photometric signal received by WASP, we obtained CORALIE spectroscopic radial velocities and follow-up light curves with the Euler and TRAPPIST telescopes. From a joint fit of these data we determine that EBLM J0555-57 consists of a sun-like primary star that is eclipsed by a low-mass companion, on a weakly eccentric 7.8-day orbit. Using a mass estimate for the primary star derived from stellar models, we determine a companion mass of 85 ± 4 M_Jup (0.081 M_⊙) and a radius of 0.84 (+0.14/−0.04) R_Jup (0.084 R_⊙) that is comparable to that of Saturn. EBLM J0555-57Ab has a surface gravity log g_2 = 5.50 (+0.03/−0.13) and is one of the densest non-stellar-remnant objects currently known. These measurements are consistent with models of low-mass stars. The photometry tables and radial velocities are only available at the CDS and on demand via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/604/L6

Sedaghati, Elyar; Boffin, Henri M. J.; Delrez, Laetitia et al. “Probing the atmosphere of a sub-Jovian planet orbiting a cool dwarf”, Monthly Notices of the Royal Astronomical Society (2017), 468. — We derive the 0.01 μm binned transmission spectrum, between 0.74 and 1.0 μm, of WASP-80b from low resolution spectra obtained with the FORS2 instrument attached to ESO's Very Large Telescope. The combination of the fact that WASP-80 is an active star, together with instrumental and telluric factors, introduces correlated noise in the observed transit light curves, which we treat quantitatively using Gaussian Processes. Comparison of our results, together with those from previous studies, to theoretically calculated models reveals an equilibrium temperature in agreement with the previously measured value of 825 K, and a sub-solar metallicity, as well as an atmosphere depleted of molecular species with absorption bands in the IR (≫5σ). Our transmission spectrum alone shows evidence for additional absorption from the potassium core and wing, whereby its presence is detected from analysis of narrow 0.003 μm bin light curves (≫5σ). Further observations with visible and near-UV filters will be required to expand this spectrum and provide more in-depth knowledge of the atmosphere. These detections are only made possible through an instrument-dependent baseline model and a careful analysis of systematics in the data.

Opitom, C.; Snodgrass, C.; Fitzsimmons, A. et al. “Ground-based monitoring of comet 67P/Churyumov–Gerasimenko gas activity throughout the Rosetta mission”, Monthly Notices of the Royal Astronomical Society (2017), 469. — Simultaneously to the ESA Rosetta mission, a world-wide ground-based campaign provided measurements of the large scale activity of comet 67P/Churyumov-Gerasimenko through measurement of optically active gas species and imaging of the overall dust coma. We present more than 2 yr of observations performed with the FORS2 low-resolution spectrograph at the VLT, TRAPPIST, and ACAM at the WHT. We focus on the evolution of the CN production as a tracer of the comet activity. We find that it is asymmetric with respect to perihelion and different from that of the dust. The CN emission is detected for the first time at 1.34 au pre-perihelion, and production rates then increase steeply to peak about 2 weeks after perihelion at (1.00 ± 0.10) × 10^25 molecules s^−1, while the post-perihelion decrease is more shallow. The evolution of the comet activity is strongly influenced by seasonal effects, with enhanced CN production when the Southern hemisphere is illuminated.

Moulane, Youssef; Benkhaldoun, Zouhair; Jehin, Emmanuel et al. “Monitoring of comets activity and composition with the TRAPPIST-North telescope”, Journal of Physics: Conference Series (2017, July), 869. — TRAPPIST-North (TRAnsiting Planets and PlanetesImals Small Telescope) is a 60-cm robotic telescope that was installed in May 2016 at the Oukaimeden Observatory. The project is led by the University of Liège (Belgium) and the Caddi Ayad University of Marrakech (Morocco). This telescope is a twin of the TRAPPIST-South telescope, which was installed at the ESO La Silla Observatory in 2010. The TRAPPIST telescopes are dedicated to the detection and characterization of planets orbiting stars other than our Sun (exoplanets) and the study of comets and other small bodies in our solar system. For the comets research, these telescopes have very sensitive CCD cameras with complete sets of narrow band filters to measure the production rates of several gases (OH, NH, CN, C3 and C2) and the dust. With TRAPPIST-North we can also observe comets that would not be visible from the southern hemisphere. Therefore, with these two telescopes, we can now observe comets continuously around their orbits. We plan to study individually the evolution of the activity, chemical composition, dust properties, and coma morphology of several comets per year and of different origins (new comets and Jupiter-family comets) over a wide range of heliocentric distances, and on both sides of perihelion. We measure the production rates of each daughter molecule using a Haser model, in addition to the Afρ parameter to estimate the dust production in the coma. In this work, we present the first measurements of the production rates of comet C/2013 X1 (PANSTARRS) observed with TN in June 2016, and the measurements of comet C/2013 V5 (Oukaimeden) observed in 2014 with TRAPPIST-South.

Luger, Rodrigo; Sestovic, Marko; Kruse, Ethan et al. “A seven-planet resonant chain in TRAPPIST-1”, Nature Astronomy (2017), 1. — The TRAPPIST-1 system is the first transiting planet system found orbiting an ultracool dwarf star. At least seven planets similar in radius to Earth were previously found to transit this host star. Subsequently, TRAPPIST-1 was observed as part of the K2 mission and, with these new data, we report the measurement of an 18.77 day orbital period for the outermost transiting planet, TRAPPIST-1 h, which was previously unconstrained. This value matches our theoretical expectations based on Laplace relations and places TRAPPIST-1 h as the seventh member of a complex chain, with three-body resonances linking every member. We find that TRAPPIST-1 h has a radius of 0.752 R_⊕ and an equilibrium temperature of 173 K.
We have also measured the rotational period of the star to be 3.3 days and detected a number of flares consistent with a low-activity, middle-aged, late M dwarf. [moins ▲]Visualisation de la référence détaillée: 92 (6 ULiège) The Spitzer search for the transits of HARPS low-mass planets. II. Null results for 19 planetsGillon, Michaël ; Demory, B.-O.; Lovis, C. et alin Astronomy and Astrophysics (2017), 601Short-period super-Earths and Neptunes are now known to be very frequent around solar-type stars. Improving our understanding of these mysterious planets requires the detection of a significant sample of ... [plus ▼]Short-period super-Earths and Neptunes are now known to be very frequent around solar-type stars. Improving our understanding of these mysterious planets requires the detection of a significant sample of objects suitable for detailed characterization. Searching for the transits of the low-mass planets detected by Doppler surveys is a straightforward way to achieve this goal. Indeed, Doppler surveys target the most nearby main-sequence stars, they regularly detect close-in low-mass planets with significant transit probability, and their radial velocity data constrain strongly the ephemeris of possible transits. In this context, we initiated in 2010 an ambitious Spitzer multi-Cycle transit search project that targeted 25 low-mass planets detected by radial velocity, focusing mainly on the shortest-period planets detected by the HARPS spectrograph. We report here null results for 19 targets of the project. For 16 planets out of 19, a transiting configuration is strongly disfavored or firmly rejected by our data for most planetary compositions. We derive a posterior probability of 83% that none of the probed 19 planets transits (for a prior probability of 22%), which still leaves a significant probability of 17% that at least one of them does transit. 
Globally, our Spitzer project revealed or confirmed transits for three of its 25 targeted planets, and discarded or disfavored the transiting nature of 20 of them. Our light curves demonstrate for Warm Spitzer excellent photometric precisions: for 14 targets out of 19, we were able to reach standard deviations that were better than 50 ppm per 30 min intervals. Combined with its Earth-trailing orbit, which makes it capable of pointing any star in the sky and to monitor it continuously for days, this work confirms Spitzer as an optimal instrument to detect sub-mmag-deep transits on the bright nearby stars targeted by Doppler surveys. The photometric and radial velocity time series used in this work are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/601/A117 [moins ▲]Visualisation de la référence détaillée: 18 (1 ULiège) 3D shape of asteroid (6)~Hebe from VLT/SPHERE imaging: Implications for the origin of ordinary H chondritesMarsset, M.; Carry, B.; Dumas, C. et alin Astronomy and Astrophysics (2017), 604Context. The high-angular-resolution capability of the new-generation ground-based adaptive-optics camera SPHERE at ESO VLT allows us to assess, for the very first time, the cratering record of medium ... [plus ▼]Context. The high-angular-resolution capability of the new-generation ground-based adaptive-optics camera SPHERE at ESO VLT allows us to assess, for the very first time, the cratering record of medium-sized (D~100-200 km) asteroids from the ground, opening the prospect of a new era of investigation of the asteroid belt's collisional history. Aims. We investigate here the collisional history of asteroid (6) Hebe and challenge the idea that Hebe may be the parent body of ordinary H chondrites, the most common type of meteorites found on Earth (~34% of the falls). Methods. We observed Hebe with SPHERE as part of the science verification of the instrument. 
Combined with earlier adaptive-optics images and optical light curves, we model the spin and three-dimensional (3D) shape of Hebe and check the consistency of the derived model against available stellar occultations and thermal measurements. Results. Our 3D shape model fits the images with sub-pixel residuals and the light curves to 0.02 mag. The rotation period (7.274 47 h), spin (343 deg,+47 deg), and volume-equivalent diameter (193 +/- 6km) are consistent with previous determinations and thermophysical modeling. Hebe's inferred density is 3.48 +/- 0.64 g.cm-3 , in agreement with an intact interior based on its H-chondrite composition. Using the 3D shape model to derive the volume of the largest depression (likely impact crater), it appears that the latter is significantly smaller than the total volume of close-by S-type H-chondrite-like asteroid families. Conclusions. Our results imply that (6) Hebe is not the most likely source of H chondrites. Over the coming years, our team will collect similar high-precision shape measurements with VLT/SPHERE for ~40 asteroids covering the main compositional classes, thus providing an unprecedented dataset to investigate the origin and collisional evolution of the asteroid belt. [moins ▲]Visualisation de la référence détaillée: 11 (3 ULiège) The HARPS search for southern extra-solar planets. XXXVI. Eight HARPS multi-planet systems hosting 20 super-Earth and Neptune-mass companionsUdry, S.; Dumusque, X.; Lovis, C. et alin ArXiv e-prints (2017), 1705We present radial-velocity measurement of eight stars observed with the HARPS Echelle spectrograph mounted on the 3.6-m telescope in La Silla (ESO, Chile). Data span more than ten years and highlight the ... [plus ▼]We present radial-velocity measurement of eight stars observed with the HARPS Echelle spectrograph mounted on the 3.6-m telescope in La Silla (ESO, Chile). Data span more than ten years and highlight the long-term stability of the instrument. 
We search for potential planets orbiting HD20003, HD20781, HD21693, HD31527, HD45184, HD51608, HD134060 and HD136352 to increase the number of known planetary systems and thus better constrain exoplanet statistics. After a preliminary phase looking for signals using generalized Lomb-Scargle periodograms, we perform a careful analysis of all signals to separate \emph{bona-fide} planets from spurious signals induced by stellar activity and instrumental systematics. We finally secure the detection of all planets using the efficient MCMC available on the Data and Analysis Center for Exoplanets (DACE web-platform), using model comparison whenever necessary. In total, we report the detection of twenty new super-Earth to Neptune-mass planets, with minimum masses ranging from 2 to 30 M$_{\rm Earth}$, and periods ranging from 3 to 1300 days. By including CORALIE and HARPS measurements of HD20782 to the already published data, we also improve the characterization of the extremely eccentric Jupiter orbiting this host. [moins ▲]Visualisation de la référence détaillée: 21 (2 ULiège) Peculiar architectures for the WASP-53 and WASP-81 planet-hosting systems★Triaud, Amaury H. M. J.; Neveu-VanMalle, Marion; Lendl, Monika et alin Monthly Notices of the Royal Astronomical Society (2017), 467We report the detection of two new systems containing transiting planets. Both were identified by WASP as worthy transiting planet candidates. Radial velocity observations quickly verified that the ... [plus ▼]We report the detection of two new systems containing transiting planets. Both were identified by WASP as worthy transiting planet candidates. Radial velocity observations quickly verified that the photometric signals were indeed produced by two transiting hot Jupiters. Our observations also show the presence of additional Doppler signals. 
In addition to short-period hot Jupiters, we find that the WASP-53 and WASP-81 systems also host brown dwarfs, on fairly eccentric orbits with semimajor axes of a few astronomical units. WASP-53c is over 16 M[SUB]Jup[/SUB]sin i[SUB]c[/SUB] and WASP-81c is 57 M[SUB]Jup[/SUB]sin i[SUB]c[/SUB]. The presence of these tight, massive companions restricts theories of how the inner planets were assembled. We propose two alternative interpretations: the formation of the hot Jupiters within the snow line or the late dynamical arrival of the brown dwarfs after disc dispersal. We also attempted to measure the Rossiter-McLaughlin effect for both hot Jupiters. In the case of WASP-81b, we fail to detect a signal. For WASP-53b, we find that the planet is aligned with respect to the stellar spin axis. In addition we explore the prospect of transit-timing variations, and of using Gaia's astrometry to measure the true masses of both brown dwarfs and also their relative inclination with respect to the inner transiting hot Jupiters. [moins ▲]Visualisation de la référence détaillée: 15 (1 ULiège) Study of the plutino object (208996) 2003 AZ84 from stellar occultations: size, shape and topographic featuresDias-Oliveira, A.; Sicardy, B.; Ortiz, J. L. et alin The Astronomical Journal (2017), 154(1), 13We present results derived from four stellar occultations by the plutino object (208996) 2003~AZ$_{84}$, detected at January 8, 2011 (single-chord event), February 3, 2012 (multi-chord), December 2, 2013 ... [plus ▼]We present results derived from four stellar occultations by the plutino object (208996) 2003~AZ$_{84}$, detected at January 8, 2011 (single-chord event), February 3, 2012 (multi-chord), December 2, 2013 (single-chord) and November 15, 2014 (multi-chord). Our observations rule out an oblate spheroid solution for 2003~AZ$_{84}$'s shape. 
Instead, assuming hydrostatic equilibrium, we find that a Jacobi triaxial solution with semi axes $(470 \pm 20) \times (383 \pm 10) \times (245 \pm 8)$~km % axis ratios $b/a= 0.82 \pm 0.05$ and $c/a= 0.52 \pm 0.02$, can better account for all our occultation observations. Combining these dimensions with the rotation period of the body (6.75~h) and the amplitude of its rotation light curve, we derive a density $\rho=0.87 \pm 0.01$~g~cm$^{-3}$ a geometric albedo $p_V= 0.097 \pm 0.009$. A grazing chord observed during the 2014 occultation reveals a topographic feature along 2003~AZ$_{84}$'s limb, that can be interpreted as an abrupt chasm of width $\sim 23$~km and depth $> 8$~km or a smooth depression of width $\sim 80$~km and depth $\sim 13$~km (or an intermediate feature between those two extremes). [moins ▲]Visualisation de la référence détaillée: 13 (4 ULiège) The 67P/Churyumov-Gerasimenko observation campaign in support of the Rosetta missionSnodgrass, C.; A'Hearn, M. F.; Aceituno, F. et alin Philosophical Transactions : Mathematical, Physical & Engineering Sciences (2017), 375We present a summary of the campaign of remote observations that supported the European Space Agency's Rosetta mission. Telescopes across the globe (and in space) followed comet 67P/Churyumov-Gerasimenko ... [plus ▼]We present a summary of the campaign of remote observations that supported the European Space Agency's Rosetta mission. Telescopes across the globe (and in space) followed comet 67P/Churyumov-Gerasimenko from before Rosetta's arrival until nearly the end of the mission in September 2016. These provided essential data for mission planning, large-scale context information for the coma and tails beyond the spacecraft and a way to directly compare 67P with other comets. The observations revealed 67P to be a relatively `well-behaved' comet, typical of Jupiter family comets and with activity patterns that repeat from orbit to orbit. 
Comparison between this large collection of telescopic observations and the in situ results from Rosetta will allow us to better understand comet coma chemistry and structure. This work is just beginning as the mission ends-in this paper, we present a summary of the ground-based observations and early results, and point to many questions that will be addressed in future studies. This article is part of the themed issue 'Cometary science after Rosetta'. [moins ▲]Visualisation de la référence détaillée: 34 (2 ULiège) WASP-167b/KELT-13b: Joint discovery of a hot Jupiter transiting a rapidly-rotating F1V starTemple, L. Y.; Hellier, C.; Albrow, M. D. et alin Monthly Notices of the Royal Astronomical Society (2017), 471(3), 2743-2752We report the joint WASP/KELT discovery of WASP-167b/KELT-13b, a transiting hot Jupiter with a 2.02-d orbit around a $V$ = 10.5, F1V star with [Fe/H] = 0.1 $\pm$ 0.1. The 1.5 R$_{\rm Jup}$ planet was ... [plus ▼]We report the joint WASP/KELT discovery of WASP-167b/KELT-13b, a transiting hot Jupiter with a 2.02-d orbit around a $V$ = 10.5, F1V star with [Fe/H] = 0.1 $\pm$ 0.1. The 1.5 R$_{\rm Jup}$ planet was confirmed by Doppler tomography of the stellar line profiles during transit. We place a limit of $<$ 8 M$_{\rm Jup}$ on its mass. The planet is in a retrograde orbit with a sky-projected spin-orbit angle of $\lambda = -165^{\circ} \pm 5^{\circ}$. This is in agreement with the known tendency for orbits around hotter stars to be more likely to be misaligned. WASP-167/KELT-13 is one of the few systems where the stellar rotation period is less than the planetary orbital period. We find evidence of non-radial stellar pulsations in the host star, making it a $\delta$-Scuti or $\gamma$-Dor variable. The similarity to WASP-33, a previously known hot-Jupiter host with pulsations, adds to the suggestion that close-in planets might be able to excite stellar pulsations. 
[moins ▲]Visualisation de la référence détaillée: 14 (4 ULiège) Reconnaissance of the TRAPPIST-1 exoplanet system in the Lyman-α lineBourrier, V.; Ehrenreich, D.; Wheatley, P. J. et alin Astronomy and Astrophysics (2017), 599The TRAPPIST-1 system offers the opportunity to characterize terrestrial, potentially habitable planets orbiting a nearby ultracool dwarf star. We performed a four-orbit reconnaissance with the Space ... [plus ▼]The TRAPPIST-1 system offers the opportunity to characterize terrestrial, potentially habitable planets orbiting a nearby ultracool dwarf star. We performed a four-orbit reconnaissance with the Space Telescope Imaging Spectrograph onboard the Hubble Space Telescope to study the stellar emission at Lyman-α, to assess the presence of hydrogen exospheres around the two inner planets, and to determine their UV irradiation. We detect the Lyman-α line of TRAPPIST-1, making it the coldest exoplanet host star for which this line has been measured. We reconstruct the intrinsic line profile, showing that it lacks broad wings and is much fainter than expected from the stellar X-ray emission. TRAPPIST-1 has a similar X-ray emission as Proxima Cen but a much lower Ly-α emission. This suggests that TRAPPIST-1 chromosphere is only moderately active compared to its transition region and corona. We estimated the atmospheric mass loss rates for all planets, and found that despite a moderate extreme UV emission the total XUV irradiation could be strong enough to strip the atmospheres of the inner planets in a few billions years. We detect marginal flux decreases at the times of TRAPPIST-1b and c transits, which might originate from stellar activity, but could also hint at the presence of extended hydrogen exospheres. Understanding the origin of these Lyman-α variations will be crucial in assessing the atmospheric stability and potential habitability of the TRAPPIST-1 planets. 
[moins ▲]Visualisation de la référence détaillée: 21 (3 ULiège) Two massive rocky planets transiting a K-dwarf 6.5 parsecs awayGillon, Michaël ; Demory, Brice-Olivier; Van Grootel, Valérie et alin Nature Astronomy (2017), 1HD 219134 is a K-dwarf star at a distance of 6.5 parsecs around which several low-mass planets were recently discovered[SUP]1,2[/SUP]. The Spitzer Space Telescope detected a transit of the innermost of ... [plus ▼]HD 219134 is a K-dwarf star at a distance of 6.5 parsecs around which several low-mass planets were recently discovered[SUP]1,2[/SUP]. The Spitzer Space Telescope detected a transit of the innermost of these planets, HD 219134 b, whose mass and radius (4.5 M[SUB]⊕[/SUB] and 1.6 R[SUB]⊕[/SUB] respectively) are consistent with a rocky composition[SUP]1[/SUP]. Here, we report new high-precision time-series photometry of the star acquired with Spitzer revealing that the second innermost planet of the system, HD 219134c, is also transiting. A global analysis of the Spitzer transit light curves and the most up-to-date HARPS-N velocity data set yields mass and radius estimations of 4.74 ± 0.19 M[SUB]⊕[/SUB] and 1.602 ± 0.055 R[SUB]⊕[/SUB] for HD 219134 b, and of 4.36 ± 0.22 M[SUB]⊕[/SUB] and 1.511 ± 0.047 R[SUB]⊕[/SUB] for HD 219134 c. These values suggest rocky compositions for both planets. Thanks to the proximity and the small size of their host star (0.778 ± 0.005 R[SUB]⊙[/SUB])[SUP]3[/SUP], these two transiting exoplanets — the nearest to the Earth yet found — are well suited for a detailed characterization (for example, precision of a few per cent on mass and radius, and constraints on the atmospheric properties) that could give important constraints on the nature and formation mechanism of the ubiquitous short-period planets of a few Earth masses. 
[moins ▲]Visualisation de la référence détaillée: 51 (7 ULiège) WASP-South transiting exoplanets: WASP-130b, WASP-131b, WASP-132b, WASP-139b, WASP-140b, WASP-141b & WASP-142bHellier, Coel; Anderson, D. R.; Collier Cameron, A. et alin Monthly Notices of the Royal Astronomical Society (2017), 465We describe seven new exoplanets transiting stars of V = 10.1 to 12.4. WASP-130b is a "warm Jupiter" having an orbital period of 11.6 d, the longest yet found by WASP. It transits a V = 11.1, G6 star with ... [plus ▼]We describe seven new exoplanets transiting stars of V = 10.1 to 12.4. WASP-130b is a "warm Jupiter" having an orbital period of 11.6 d, the longest yet found by WASP. It transits a V = 11.1, G6 star with [Fe/H] = +0.26. Warm Jupiters tend to have smaller radii than hot Jupiters, and WASP-130b is in line with this trend (1.23 Mjup; 0.89 Rjup). WASP-131b is a bloated Saturn-mass planet (0.27 Mjup; 1.22 Rjup). Its large scale height coupled with the V = 10.1 brightness of its host star make the planet a good target for atmospheric characterisation. WASP-132b is among the least irradiated and coolest of WASP planets, being in a 7.1-d orbit around a K4 star. It has a low mass and a modest radius (0.41 Mjup; 0.87 Rjup). The V = 12.4, [Fe/H] = +0.22 star shows a possible rotational modulation at 33 d. WASP-139b is the lowest-mass planet yet found by WASP, at 0.12 Mjup and 0.80 Rjup. It is a "super-Neptune" akin to HATS-7b and HATS-8b. It orbits a V = 12.4, [Fe/H] = +0.20, K0 star. The star appears to be anomalously dense, akin to HAT-P-11. WASP-140b is a 2.4-Mjup planet in a 2.2-d orbit that is both eccentric (e = 0.047) and with a grazing transit (b = 0.93) The timescale for tidal circularisation is likely to be the lowest of all known eccentric hot Jupiters. The planet's radius is large (1.4 Rjup), but uncertain owing to the grazing transit. The host star is a V = 11.1, [Fe/H] = +0.12, K0 dwarf showing a prominent 10.4-d rotational modulation. 
The dynamics of this system are worthy of further investigation. WASP-141b is a typical hot Jupiter, being a 2.7 Mjup, 1.2 Rjup planet in a 3.3-d orbit around a V = 12.4, [Fe/H] = +0.29, F9 star. WASP-142b is a typical bloated hot Jupiter (0.84 Mjup, 1.53 Rjup) in a 2.1-d orbit around a V = 12.3, [Fe/H] = +0.26, F8 star. [moins ▲]Visualisation de la référence détaillée: 139 (6 ULiège) Seven temperate terrestrial planets around the nearby ultracool dwarf starGillon, Michaël ; Triaud, Amaury; Demory, Brice-Olivier et alin Nature (2017), 542One focus of modern astronomy is to detect temperate terrestrial exoplanets well-suited for atmospheric characterisation. A milestone was recently achieved with the detection of three Earth-sized planets ... [plus ▼]One focus of modern astronomy is to detect temperate terrestrial exoplanets well-suited for atmospheric characterisation. A milestone was recently achieved with the detection of three Earth-sized planets transiting (i.e. passing in front of) a star just 8% the mass of the Sun 12 parsecs away. Indeed, the transiting configuration of these planets combined with the Jupiter-like size of their host star - named TRAPPIST-1 - makes possible indepth studies of their atmospheric properties with current and future astronomical facilities. Here we report the results of an intensive photometric monitoring campaign of that star from the ground and with the Spitzer Space Telescope. Our observations reveal that at least seven planets with sizes and masses similar to the Earth revolve around TRAPPIST-1. The six inner planets form a near-resonant chain such that their orbital periods (1.51, 2.42, 4.04, 6.06, 9.21, 12.35 days) are near ratios of small integers. This architecture suggests that the planets formed farther from the star and migrated inward. The seven planets have equilibrium temperatures low enough to make possible liquid water on their surfaces. 
[moins ▲]Visualisation de la référence détaillée: 182 (29 ULiège) Strong XUV irradiation of the Earth-sized exoplanets orbiting the ultracool dwarf TRAPPIST-1Wheatley, Peter J.; Louden, Tom; Bourrier, Vincent et alin Monthly Notices of the Royal Astronomical Society (2017), 465We present an XMM-Newton X-ray observation of TRAPPIST-1, which is an ultracool dwarf star recently discovered to host three transiting and temperate Earth-sized planets. We find the star is a relatively ... [plus ▼]We present an XMM-Newton X-ray observation of TRAPPIST-1, which is an ultracool dwarf star recently discovered to host three transiting and temperate Earth-sized planets. We find the star is a relatively strong and variable coronal X-ray source with an X-ray luminosity similar to that of the quiet Sun, despite its much lower bolometric luminosity. We find L_x/L_bol=2-4x10^-4, with the total XUV emission in the range L_xuv/L_bol=6-9x10^-4. Using a simple energy-limited model we show that the relatively close-in Earth-sized planets, which span the classical habitable zone of the star, are subject to sufficient X-ray and EUV irradiation to significantly alter their primary and perhaps secondary atmospheres. Understanding whether this high-energy irradiation makes the planets more or less habitable is a complex question, but our measured fluxes will be an important input to the necessary models of atmospheric evolution. [moins ▲]Visualisation de la référence détaillée: 79 (1 ULiège) Searching for Rapid Orbital Decay of WASP-18bWilkins, Ashlee N.; Delrez, Laetitia; Barker, Adrian J. et alin Astrophysical Journal Letters (2017), 836The WASP-18 system, with its massive and extremely close-in planet, WASP-18b (M [SUB] p [/SUB] = 10.3M [SUB] J [/SUB], a = 0.02 au, P = 22.6 hr), is one of the best-known exoplanet laboratories to ... 
[plus ▼]The WASP-18 system, with its massive and extremely close-in planet, WASP-18b (M [SUB] p [/SUB] = 10.3M [SUB] J [/SUB], a = 0.02 au, P = 22.6 hr), is one of the best-known exoplanet laboratories to directly measure Q‧, the modified tidal quality factor and proxy for efficiency of tidal dissipation, of the host star. Previous analysis predicted a rapid orbital decay of the planet toward its host star that should be measurable on the timescale of a few years, if the star is as dissipative as is inferred from the circularization of close-in solar-type binary stars. We have compiled published transit and secondary eclipse timing (as observed by WASP, TRAPPIST, and Spitzer) with more recent unpublished light curves (as observed by TRAPPIST and Hubble Space Telescope) with coverage spanning nine years. We find no signature of a rapid decay. We conclude that the absence of rapid orbital decay most likely derives from Q‧ being larger than was inferred from solar-type stars and find that Q‧ ≥ 1 × 10[SUP]6[/SUP], at 95% confidence; this supports previous work suggesting that F stars, with their convective cores and thin convective envelopes, are significantly less tidally dissipative than solar-type stars, with radiative cores and large convective envelopes. [moins ▲]Visualisation de la référence détaillée: 31 (1 ULiège) First limits on the occurrence rate of short-period planets orbiting brown dwarfsHe, Matthias Y.; Triaud, Amaury H. M. J.; Gillon, Michaël in Monthly Notices of the Royal Astronomical Society (2017), 464Planet formation theories predict a large but still undetected population of short-period terrestrial planets orbiting brown dwarfs. Should specimens of this population be discovered transiting relatively ... [plus ▼]Planet formation theories predict a large but still undetected population of short-period terrestrial planets orbiting brown dwarfs. 
Should specimens of this population be discovered transiting relatively bright and nearby brown dwarfs, the Jupiter-size and the low luminosity of their hosts would make them exquisite targets for detailed atmospheric characterisation with JWST and future ground-based facilities. The eventual discovery and detailed study of a significant sample of transiting terrestrial planets orbiting nearby brown dwarfs could prove to be useful not only for comparative exoplanetology but also for astrobiology, by bringing us key information on the physical requirements and timescale for the emergence of life. In this context, we present a search for transit-signals in archival time-series photometry acquired by the Spitzer Space Telescope for a sample of 44 nearby brown dwarfs. While these 44 targets were not particularly selected for their brightness, the high precision of their Spitzer light curves allows us to reach sensitivities below Earth-sized planets for 75% of the sample and down to Europa-sized planets on the brighter targets. We could not identify any unambiguous planetary signal. Instead, we could compute the first limits on the presence of planets on close-in orbits. We find that within a 1.28 day orbit, the occurrence rate of planets with a radius between 0.75 and 3.25 R$_\oplus$ is {\eta} < 67 $\pm$ 1%. For planets with radii between 0.75 and 1.25 R$_\oplus$, we place a 95% confident upper limit of {\eta} < 87 $\pm$ 3%. If we assume an occurrence rate of {\eta} = 27% for these planets with radii between 0.75 and 1.25 R$_\oplus$, as the discoveries of the Kepler-42b and TRAPPIST-1b systems would suggest, we estimate that 175 brown dwarfs need to be monitored in order to guarantee (95%) at least one detection. [moins ▲]Visualisation de la référence détaillée: 49 (1 ULiège)
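The "near ratios of small integers" claim for the TRAPPIST-1 resonant chain can be checked directly from the orbital periods quoted in the Gillon et al. (2017) abstract above (1.51, 2.42, 4.04, 6.06, 9.21, 12.35 days); only those published numbers are assumed in this quick sketch:

```python
from fractions import Fraction

# Orbital periods (days) of TRAPPIST-1 b-g, as quoted in the abstract above.
periods = [1.51, 2.42, 4.04, 6.06, 9.21, 12.35]

# For each adjacent pair, find the nearest fraction with a small denominator.
for p_in, p_out in zip(periods, periods[1:]):
    ratio = p_out / p_in
    approx = Fraction(ratio).limit_denominator(5)
    print(f"{ratio:.3f} ~ {approx}")
```

The scan recovers the small-integer ratios 8/5, 5/3, 3/2, 3/2 and 4/3 between neighbouring planets, which is what "near-resonant chain" means in the abstract.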
# global section vector bundle
Do non-zero global sections of a vector bundle always exist over a manifold $M$? If $M$ is compact I think they do, because taking a partition of unity $\rho_{\alpha}$ subordinate to a finite covering, and defining local sections $s_{\alpha}$ on this finite covering, I can take $$s:=\sum s_{\alpha} \rho_{\alpha}$$ Is this argument right? I guess it is not true for general $M$. Thanks
Non-zero global section of what vector bundle exactly? It's certainly not true for every vector bundle - consider the Mobius bundle over the circle. (You can draw it in $\mathbb{R}^3$). The problem is that your partition of unity argument doesn't guarantee things won't cancel out at some points. – Jason DeVito Aug 28 '12 at 17:37
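To make the Möbius-bundle counterexample from the comment concrete: a continuous section of the Möbius bundle over the circle can be modelled as a function on $[0,1]$ with the twisted gluing condition $f(1) = -f(0)$, and the intermediate value theorem then forces a zero. A minimal numerical sketch (the particular function `f` below is an arbitrary choice, not canonical):

```python
import math

def find_zero(f, a=0.0, b=1.0, steps=10_000):
    """Crude sign-change scan: return a point near a zero of f on [a, b], or None."""
    prev = f(a)
    for k in range(1, steps + 1):
        t = a + (b - a) * k / steps
        v = f(t)
        if prev == 0.0 or prev * v < 0.0:
            return t
        prev = v
    return None

# An arbitrary (hypothetical) continuous section of the Mobius bundle,
# modelled as f: [0, 1] -> R satisfying the twisted condition f(1) = -f(0).
f = lambda t: math.cos(math.pi * t) * (1.0 + 0.3 * math.sin(2.0 * math.pi * t))
assert abs(f(1.0) + f(0.0)) < 1e-12  # twisted boundary condition holds

zero = find_zero(f)
print(zero)  # a zero is guaranteed by the IVT; for this f it lies near t = 0.5
```

This is exactly why the partition-of-unity sum in the question can cancel at some points: local sections glued with a sign twist cannot all stay non-zero.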
You do not need compactness nor even paracompactness (i.e. no partition of unity is necessary).
Take any nowhere zero continuous section of the vector bundle on an open trivializing subset $U$, multiply it by a continuous plateau function with compact support in $U$, and extend by zero to the whole manifold: this yields a non-identically zero continuous section of the vector bundle.
NB: I have interpreted your question as asking for non-identically zero continuous sections: they always exist.
In general it is however impossible to find a nowhere zero section of an arbitrary vector bundle on an arbitrary manifold, as shown in other answers.
But sometimes it is possible: on a contractible manifold like $\mathbb R^n$ all vector bundles are trivial and thus they certainly admit of nowhere zero continuous sections.
-
No, every even-dimensional sphere is a counter-example (cf. the hairy ball theorem).
Moreover, a closed orientable manifold admits a nowhere zero section of its tangent bundle iff its Euler class (and therefore also its Euler characteristic) vanishes.
-
Absolutely not! Take the tangent bundle over a manifold. A globally defined non-zero section is a non-singular vector field. The Poincaré-Hopf theorem relates the topology of your surface to the existence of a non-singular vector field. Consider, e.g., the sphere: there are no continuous, non-zero vector fields on the sphere. This is called the Hairy Ball Theorem. The same holds for any compact, orientable surface with non-zero Euler characteristic.
For further reading, take a look at Chern Classes (in the complex case) and Stiefel–Whitney classes (in the real case).
-
For instance take the tangent bundle of $\mathbb S^2$. Then you can't have a non-vanishing vector field defined globally.
1. Sep 2, 2015
### kau
Somehow I can't relate two things and am confused about this.
What I understand when someone says that some spacetime has a conformal boundary is that the metric can be written conformally to some other metric in which the coordinates are finite, so it has a boundary.
Now I just read something on the AdS conformal boundary which I can't understand much.
Consider a (d+2)-dimensional spacetime whose metric has two negative eigenvalues and impose the following condition:
$$-x_0^{2}+ \sum_{i=1}^{d} x_i^{2} - x_{d+1}^{2} = -L^{2}$$ Doing this gives you AdS space.
Now to understand the conformal boundary of this spacetime the logic that is put forward is the following:
For large $X^{M}$, the condition $-x_0^{2}+ \sum_i x_i^{2} - x_{d+1}^{2} = -L^{2}$ approaches the null-cone condition $-x_0^{2}+ \sum_i x_i^{2} - x_{d+1}^{2} = 0$. The reason, I think, is that since we have both positive and negative signs, in the large-value limit the $-L^{2}$ on the right contributes a relatively small quantity which we can take to be zero. (Please correct me if I am wrong in this statement.) But the condition is that it has to equal $-L^{2}$ to be a part of AdS, so in some sense it has to have some end somewhere.
And then they define the boundary as the set of points lying on null geodesics originating from the centre of the (d+2)-dimensional spacetime and ending on the null cone at infinity. Can someone explain this part?
2. Sep 4, 2015
### samalkhaiat
Reading the above, it is not at all clear to me how much you know about the conformal group $C(1,n-1)$ and its global action.
1) Globally, the conformal group $C(1,n-1)$ acts not on the Minkowski space $\mbox{M}^{(1,n-1)}$ but on its conformal compactification $\mbox{M}_{c}^{(1,n-1)}$. This is an n-dimensional compact manifold isomorphic to $\left( S^{n-1} \times S^{1} \right) / \mathbb{Z}_{2}$.
2) The basic idea behind Ads/CFT is the fact that the conformal boundary of $\mbox{Ads}_{n+1}$ is a 2-fold covering of $\mbox{M}_{c}^{(1,n-1)}$, i.e. $\partial(\mbox{Ads}_{n+1}) = S^{n-1} \times S^{1}$.
If you understand where the above two points come from, then it is easy to understand the relation $\mbox{M}^{(1,n-1)} \cong \mbox{M}_{c}^{(1,n-1)} - \{ \mathcal{K}_{\infty} \}$, where $\{ \mathcal{K}_{\infty} \} \subset \mathbb{R}^{(2,n)}$ is the set of points at infinity (projective cone).
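The large-$X^{M}$ limit in the question can be written out explicitly in the embedding-space language (a sketch, in one common convention): rescale $X^{M} \to \lambda X^{M}$ and let $\lambda \to \infty$, so that the hyperboloid condition degenerates into the null-cone condition, $$ -x_{0}^{2} + \sum_{i=1}^{d} x_{i}^{2} - x_{d+1}^{2} = -L^{2} \;\longrightarrow\; -x_{0}^{2} + \sum_{i=1}^{d} x_{i}^{2} - x_{d+1}^{2} = 0 , $$ since the right-hand side scales as $-L^{2}/\lambda^{2} \to 0$. Points of the conformal boundary are then rays on this cone, i.e. null directions identified under $X^{M} \sim c\, X^{M}$ with $c > 0$, which is the projective cone $\{ \mathcal{K}_{\infty} \}$ of the previous post.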
## Entropy in real life
$\Delta S = \frac{q_{rev}}{T}$
juchung7
Posts: 44
Joined: Fri Sep 29, 2017 7:05 am
### Entropy in real life
This is kind of random, but I was just wondering, how does entropy increase through air conditioning? Or any kind of cold-increasing system?
Samira 2B
Posts: 38
Joined: Fri Sep 29, 2017 7:05 am
### Re: Entropy in real life
The entropy of a room may decrease as it is cooled, but that does not mean the entropy of the whole system decreases. The heat simply flows to a different place. $\Delta S$ here is the change in total entropy as energy moves between the two components (the air-conditioned room and its surroundings): the room's entropy decreases, but the surroundings' entropy increases by at least as much.
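A quick numerical check of this, using $\Delta S = \frac{q_{rev}}{T}$ with made-up but plausible numbers (the 1000 J, 400 J, and both temperatures below are illustrative assumptions):

```python
# An air conditioner removes q_c of heat from a cool room and, using work W,
# dumps q_c + W into the warmer outside air.  The room's entropy drops, but
# the surroundings gain more entropy than the room loses.
q_c = 1000.0     # J of heat removed from the room (assumed)
W = 400.0        # J of work driving the heat pump (assumed)
T_room = 293.0   # K, about 20 C
T_out = 308.0    # K, about 35 C

dS_room = -q_c / T_room        # negative: the room's entropy decreases
dS_out = (q_c + W) / T_out     # positive: heat delivered to the outside air
dS_total = dS_room + dS_out

print(f"room {dS_room:+.3f} J/K, outside {dS_out:+.3f} J/K, total {dS_total:+.3f} J/K")
```

The room loses about 3.4 J/K while the surroundings gain about 4.5 J/K, so the total entropy still increases, consistent with the second law.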
# dgl.unbatch
dgl.unbatch(graph)
Return the list of graphs in this batch.
Parameters: graph (DGLGraph) – The batched graph.
Returns: A list of DGLGraph objects whose attributes are obtained by partitioning the attributes of graph. The length of the list is the same as the batch size of graph.
Return type: list
Notes
Unbatching will break each field tensor of the batched graph into smaller partitions.
For simpler tasks such as node/edge state aggregation, try to use readout functions.
## Thursday, November 30, 2006
### A Simple Turing Pattern
It all started back in September when Discovery Institute hack Casey Luskin attacked science blogger Chris Mooney, author of The Republican War on Science. Then a couple of weeks ago he went after science blogger Carl Zimmer, the fantastic writer whose work appears in the New York Times. Among the inanities he spewed was a defense of imperfection by comparing ID to a Ford Pinto.
"Was the Ford Pinto, with all its imperfections revealed in crash tests, not designed?"
This statement goes against the whole design argument: is God a poor engineer who didn't heed Murphy's Law?
As ridiculous as that analogy is, Karmen at Chaotic Utopia glommed on to a doozy that all the other science bloggers had missed.
The article called evolution a "simple" process. In our experience, does a "simple" process generate the type of vast complexity found throughout biology?
I can see how this must've really irked Karmen since one of her regular features is Friday Fractals. You see, fractals are complex patterns generated from simple algorithms.
I'm afraid my fractals aren't quite as good as Karmen's since I made mine with the free software GIMP. The point remains that a fractal is a perfect example of a "complex design" that's generated by a few simple instructions.
The fun continues. Mark Chu-Carroll of Good Math, Bad Math expatiated upon the theme by bringing cellular automata (CA) into the mix.
For the simplest example of this, line up a bunch of little tiny machines in a row. Each machine has an LED on top. The LED can be either on, or off. Once every second, all of the CAs simultaneously look at their neighbors to the left and to the right, and decide whether to turn their LED on or off based on whether their neighbors lights are on or off. Here's a table describing one possible set of rules for the decision about whether to turn the LED on or off.
Current State | Left Neighbor | Right Neighbor | New State
On | On | On | Off
On | On | Off | On
On | Off | On | On
On | Off | Off | On
Off | On | On | On
Off | On | Off | Off
Off | Off | On | On
Off | Off | Off | Off
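A minimal sketch of this automaton in Python. The rule table is exactly the one above, keyed on (current, left, right); joining the row into a ring so the end machines have two neighbors is our own assumption, since the description doesn't say what happens at the edges:

```python
# Next-state lookup keyed on (current, left, right); 1 = LED on, 0 = LED off.
RULE = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 0, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Advance every machine one tick; the ends wrap around to form a ring."""
    n = len(cells)
    return [RULE[(cells[i], cells[i - 1], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 9
row[4] = 1                    # start with a single lit LED in the middle
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```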
There you have two examples of "complex designs" spawned by "simple processes." Before I bring up a third, I should mention that MarkCC made a point that the above CA is Turing complete. Nice segue, since the next image will be a Turing Pattern. This "design" is so named because it derives from the principles laid out in the great mathematician Alan Turing's 1952 paper The Chemical Basis of Morphogenesis. In it, Turing demonstrates how "complex" natural patterns such as a leopard's spots (or any embryological development) can be generated from simple chemical interactions. This ScienceDaily article describes it thus:
Based on purely theoretical considerations, Turing proposed a reaction and diffusion mechanism between two chemical substances. Using mathematics, he proved that such a simple system could produce a multitude of patterns. If one substance, the activator, produces itself and an inhibitor, while the inhibitor breaks down or inhibits the activator, a spontaneous distribution pattern of substances in the form of stripes and patches can be created. An essential requirement for this is that the inhibitor can be distributed faster through diffusion than the activator, thereby stabilizing the irregular distribution. This kind of dynamic could determine the arrangement of periodic body structures and the pattern of fur markings.
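The quoted mechanism can be simulated directly. Below is a small Gray-Scott reaction-diffusion sketch, one standard numerical realization of Turing's activator-inhibitor idea; the grid size, feed/kill rates, and diffusion constants are common textbook choices, not values from the article. Note the key Turing requirement in the diffusion constants: the substrate u spreads twice as fast as the autocatalyst v.

```python
import numpy as np

n = 100
Du, Dv = 0.16, 0.08   # u (substrate) diffuses faster than v (autocatalyst)
F, k = 0.040, 0.060   # feed and kill rates (common demo values)

u = np.ones((n, n))      # substrate everywhere
v = np.zeros((n, n))
u[45:55, 45:55] = 0.50   # seed a perturbed patch in the middle
v[45:55, 45:55] = 0.25

def lap(a):
    """Five-point Laplacian with periodic (wrap-around) boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

for _ in range(2000):
    uvv = u * v * v                       # the autocatalytic reaction term
    u += Du * lap(u) - uvv + F * (1 - u)
    v += Dv * lap(v) + uvv - (F + k) * v
# After enough steps, v organizes into spots and stripes; plot it to see the pattern.
```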
I generated the following image using the Turing Pattern plug-in for GIMP.
The kicker is that the above mentioned ScienceDaily article is entitled Control Mechanism For Biological Pattern Formation Decoded and it's about how biologists and mathematicians in Freiburg—hence the 'German flag' color scheme on my Turing Pattern—have found an example in nature of just what Turing predicted.
Biologists from the Max Planck Institute of Immunobiology in Freiburg, in collaboration with theoretical physicists and mathematicians at the University of Freiburg, have for the first time supplied experimental proof of the Turing hypothesis of pattern formation. They succeeded in identifying substances which determine the distribution of hair follicles in mice. Taking a system biological approach, which linked experimental results with mathematical models and computer simulations, they were able to show that proteins in the WNT and DKK family play a crucial role in controlling the spatial arrangement of hair follicles and satisfy the theoretical requirements of the Turing hypothesis of pattern formation. In accordance with the predictions of the mathematical model, the density and arrangement of the hair follicles change with increased or reduced expression of the WNT and DKK proteins.
There you go, Mr. Luskin: an example from natural biology of a simple process generating vast complexity. To your Woo, I say Schwiiing!
## Tuesday, November 28, 2006
### Spiral coolness
This is just too cool! (via Chaotic Utopia)
### Kissing Mirror Neurons
On my return trip from Thanksgiving vacation, I had the pleasure of taking DC's Metro to Union Station. At some point early in the trip, four college-aged girls boarded the train. I naturally noticed this because they were all hotties (two of them were super-hotties). I got a bit curious when I noticed that they formed two pairs that were uneasily close. Could it be??
Nah, probably just my imagination; besides, it's rude to stare. So I went back to reading my magazine. But they weren't about to let me do that--they were being noisy. And every time I looked up, my suspicions were bolstered. That's when I saw the blatant Public Display of Affection: "All right, lesbians!" Not staring was more difficult now as was holding back my excitement. At the next stop they got off the Metro and my ride got mundane again.
A famous comedienne (sorry I can't remember which one) once commented on how she didn't understand men's obsessions with lesbians. After all, lesbianism is the ultimate dismissal of masculinity; it should logically be threatening to men. But it's not. Why not?
That's actually a pretty interesting question. In a rational world, men wouldn't get turned on by girl on girl action, but believe me, they do. For a long time, my explanation for this derived from my rudimentary knowledge of evolutionary psychology. Males are out to spread their seed, so they see a lesbian coupling as an opportunity to jump in and procreate more. Females, on the other hand, want a man who will help rear her children, so homosexuals are a bad investment.
This hypothesis started to unravel for me, though. It seemed that every woman I brought the subject up with was not only cool with having gay male companions, but would jump at the opportunity to go party at a gay bar. I realize that this is anecdotal and that their motives might not in fact be voyeuristic (but their mannerisms somehow gave me that déjà-vu feeling of "All right, lesbians!"). This was seriously undermining my EP hypothesis; I needed something new.
On the Amtrak train back to Philly (with the "METRO incident" still fresh on my mind) I read an article about mirror neurons. Everything just clicked together and now I had my new pet hypothesis.
A mirror neuron is a neuron which fires both when an animal performs an action and when the animal observes the same action performed by another (especially conspecific) animal. Thus, the neuron "mirrors" the behavior of another animal, as though the observer were itself performing the action. These neurons have been observed in primates, including humans, and in some birds.
Mirror neurons were first discovered by Giacomo Rizzolatti and other Italian neuroscientists. They were first discovered in monkeys whose brains were wired up with electrodes; they were later confirmed to exist in humans (recent research suggests that humans are particularly well-endowed with mirror neurons). The interesting thing about mirror neurons is that they seem to be sensitive to intent. For example, in the monkey experiments, when the simian watched a hand pick up an object, the same neurons fired as when the monkey itself picked up that object; but when it watched a hand pretend to pick up a non-existent object, the neurons didn't fire. And this pattern was observed even when the monkey's view was obscured by a screen. In other words, when the monkey knew there was an object behind the screen, its (mirror) neurons fired when it watched the hand go behind the screen to pick up the object; but they failed to fire when the monkey knew there was nothing behind the screen.
It stands to reason that we have mirror neurons for kissing. These same neurons that fire when we kiss someone should also fire when we watch others kissing someone. And I would expect that if you're the kind of person who is aroused by kissing (I'll go ahead and aver that that's the predominance of humanity), watching others kiss should trigger some of those same feelings.
But how does this explain men's particular fascination with lesbians? My answer is "the Necker cube effect." The Necker cube is an optical illusion. It consists of 12 interconnected lines drawn on a flat surface. The human brain wants to see it in three dimensions and so adds depth to it. But it doesn't end there; there are two possible 3D configurations: with the lower square up front and with the upper square up front. Since both are possible, and since the brain can't "see" them simultaneously, it flips back and forth. I usually see the lower square up front first, then it starts to flip-flop back and forth.
Perhaps a more appropriate optical illusion is the "two ladies or one" illusion (are the two ladies about to kiss?) ;-)
One of my favorites, though, is the Lyondell cube. Below is my foam Lyondell cube. It is just a cube with a smaller cube cut out of one of its corners. But if you look at it from the right angle, the missing corner becomes a solid cube budding out from the main cube--then it reverts back to a hole. The effect is quite eerie when you hold the cube and wiggle and wobble it in your hand. Just freaky!
My hypothesis is that when watching lesbians kiss, men's kissing mirror neurons are activated, but then, just like the Necker cube, they start to flip back and forth between which girl is activating the mirror neurons (and this adds extra excitement).
Since I came up with this hypothesis on the fly, I realize that
A) It may be total bunk, and/or
B) Someone else may have already come up with the same idea.
However I find it intriguing enough to just go with it.
On that note I'll leave you with a short YouTube video. (I should probably insert an "adult content" warning here, but if you're the type who is offended by two consenting adults kissing, then you're probably also offended by my posts on religion. Which means that this weblog is not for you.)
And if my hypothesis is correct, I certainly wouldn't want to slight any straight females or gay males who may stumble upon this post.
### Belated Congratulations!
I'm a bit late doing this post (although I did leave a comment when it was fresh), but congratulations on the engagement of two excellent science bloggers (physics bloggers, no less).
Jennifer Ouellette of Cocktail Party Physics is one of my favorite bloggers because she's such a pleasure to read (I might just have to buy The Physics of the Buffyverse) and it doesn't hurt that she has me on her blogroll (Of course I still don't have a blogroll myself, but when I get around to it, she'll be there).
Sean Carroll of Cosmic Variance is also an awesome physics blogger. I must confess that I'm not as big a reader of CV as I am of CPP. (although how can you not love photographic evidence of Russell's teapot?)
Love found on the internet between two sciencephiles. What could be better?
Congratulations!
### Meme propagation experiment
There's a meme going around the net (via) and there's an experiment seeing how fast it spreads. It goes thus:
1. Please link to this post by Acephalous (as I'm doing)
2. Ask your readers to do the same (if you haven't already, remember, it's for SCIENCE!)
3. Ping Technorati. (and spell it correctly)
I am always willing to do my part for science. Be on the lookout for my upcoming experiment here I'll need my readers to send me money ;-)
### Sieg Heil, Mein Furry!
Yesterday I came across an interesting site while browsing the internets. It's a website called Cats That Look Like Hitler. I guess you can find anything on the internet. My favorite Kitler is Frodo.
Although I must tip my hat to Charlie--the costume had me rolling on the floor.
What's next? Dogs that look like Saddam? Gerbils that look like Kim Jong Il? Personally, I'll just stick to the world leader/animal resemblance that is at the forefront right now.
Read the comment by the artist Chris Savido.
## Sunday, November 19, 2006
### Paper Art
I first saw this on A Blog Around The Clock. Now it seems someone has put the images together into a video slideshow. These were all made with just a single sheet of paper and scissors. Pretty cool!
## Sunday, November 12, 2006
### 0.000... > 0
When I was in high school, I learned that 0.999... = 1. I found it shocking at first, but after thinking about it, I realized that the proof was airtight. But recently, the "controversy" has reared its head again on the internet--here, here, and here (as a poll no less, since the best way to find mathematical truths is by quorum).
At first I read the threads with amusement, but gradually the counter-arguments began to convert me. I now realize not only that 0.999... ≠ 1, but also that 0.000... ≠ 0. It simply follows from 1 - 0.999... = 0.000...: since 0.999... ≠ 1, then 0.000... ≠ 0. And furthermore, all the brilliant proofs for the former also apply to the latter.
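For reference, the airtight high-school argument is just the geometric series:

0.999... = 9/10 + 9/100 + 9/1000 + ... = (9/10) / (1 - 1/10) = 1,

and subtracting both sides from 1 gives 0.000... = 0.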
I have assembled below a list of said proofs which I've slightly modified to prove that 0.000... > 0. Enjoy!
I now understand how this conclusion is reached. but unlike how the article suggests I have no problem in thinking in the infinite. I have no problem with the 'concept' of 0.000~ as a forever continuing sequence of digits. I accept that in all practical purposes 0.000~ might as well be 0 and that math solutions calculate it to be 0. I also accept that it is impossible to have 0.000~ of anything (you cannot hold an infinity). But this does not stop 0.000~ (as a logical concept) forever being >0.
On to the main issue: 0.0000000~ infinite 0s is NOT equal to 0, because 0.0000000~infinite 0s is not a number. The concept of an infinite number of 0s is meaningless (or at least ill-defined) in this context because infinity is not a number. It is more of a process than anything else, a notion of never quite finishing something.
However, we can talk intelligently about a sequence:
{.0, .00, .000, ... }
in which the nth term is equal to sum(0/(10^i), i=1..n). We can examine its behavior as n tends to infinity.
It just so happens that this sequence behaves nicely enough that we can tell how it will behave in the long term. It will converge to 0. Give me a tolerance, and I can find you a term in the sequence which is within this tolerance to 0, and so too will all subsequent terms in the sequence.
The limit is equal to 0, but the sequence is not. A sequence is not a number, and cannot be equated to one.
We hold 1/3 = 0.333~
but as 0.333~ - 0.333~ = 0.000~ and 0.000~ ≠ 0.0 and 1/3 - 1/3 = 0/1 then surely 0.333~ ≠ 1/3.
Confusing fractions and decimal just highlights the failings of decimal math. 0.000~ does not equal 0.0. If it did, the 0.000~ would simply not exist as a notion. It’s very existence speaks of a continually present piece. The very piece that would not render it 0.0. It keeps approaching emptyness by continually adding another decimal place populated by a 0, which does nothing to diminish the fact that you need to add yet more to it to make it the true 0.0 and so on to infinity.
There is obviously an error in the assumption that 1/3 = 0.333~ or that it highlights the fact that decimal can not render 1/3 accurately. Because 0.000~ ≠ 0.0
Ah I see the problem.. It's just a rounding error built into the nature of decimal Math. there is no easy way to represent a value that is half way between 0.000~ and 0.0 in decimal because the math isn’t set up to deal with that. Thus when everything shakes out the rounding error occurs (the apparent disparity in fractions and decimal)
No it does not. by it's very nature 0.000000000000rec is always just slightly greater than 0.0 thus they are not equal.
But for practical purposes then it is safe to conclude equivalency as long as you remember that they are not in reality equivalent.
0.00000~ is infinitely close to 0.
For practical purposes (and mathematically) it is 0.
But is it really the same as 0?
I don't know.
0.00000~ is not per definition equal to 0. This only works in certain fields of numbers.
What worries me about this proof is that it assumes that 0.0000~ can sensibly be multiplied by 10 to give 00.0000~ with the same number of 0s after the decimal point. Surely this is cheating? In effect, an extra 0 has been sneaked in, so that when the lower number is subtracted, the 0s disappear.
The other problem I have is that no matter how many 0s there are after the decimal point, adding an extra 0 only ever takes you 0/10 of the remaining distance towards unity... so even an infinite number of 0s will still leave you with a smidgen, albeit one that is infinitely small (still a smidgen nevertheless).
In reality,I think 0.0..recurring is 0.
But if the 'concept' of infinity exists, then as a 'concept' .0 recurring is not 0.
From what I know, the sum to infinity formula was to bridge the concept of infinity into reality (to make it practical), that is to provide limits.*
It's like the "if i draw 1 line that is 6 inches and another that is 12, conceptually they are made up of the same number of infinitesimally small points" but these 'points' actually dont exist in reality.
Forgot the guy who came up with the hare and tortoise analogy, about how the hare would not be able to beat the tortoise who had a head-start - as the hare had to pass an infinite number of infinitesimally small points before he'd even reach the tortoise.
He used that as 'proof' that reality didn't 'exist' rather than what was 'obvious' to me (when I heard it) - that infinity didn't exist in reality.
So my conclusion is 0.0 recurring is conceptually the infinitesimal small value numerically after the value 0. (If anyone disagrees, then what is the closest value to 0 that isn't 0 and is greater than 0(mathematically)?)
In reality, it is 0 due to requirements of limits.
Can anyone prove the sum to inifinity formula from 'first prinicipals'?
Okay, non-math-geek, here. Isn't there some difference between a number that can be expressed with a single digit and one that requires an INFINITE number of symbols to name it? I've always imagined that infinity stretches out on either side of the number line, but also dips down between all the integers. Isn't .0000etc in one of those infinite dips?
Haha not only are there holes in your logic, but there are holes in your mathematics.
First of all, by definition the number .00000000... cannot and never will be an integer. An integer is a whole number. .00000000... is not, obviously, hence the ...
The ... is also a sad attempt at recreating the concept of infinity. I only say concept because you can't actually represent infinity on a piece of paper. Except by the symbol ∞. I found a few definitions of infinity, most of them sound like this: "that which is free from any possible limitation." What is a number line? A limitation. For a concrete number which .0000000... is not. (Because it's continuing infinitely, no?)
Also, by your definition, an irrational number is a number that cannot be accurately portrayed as a fraction. Show me the one fraction (not addition of infinite fractions) that can represent .00000000...
You can't, can you?
Additionally, all of your calculations have infinitely repeating decimals which you very kindly shortened up for us (which you can't do, because again, you can't represent the concept of infinity on paper or even in html). If you had stopped the numbers where you did, the numbers would have rounded and the calculation would indeed, equal 0.
Bottom line is, you will never EVER get 0/1 to equal .0000000... You people think you can hide behind elementary algebra to fool everyone, but in reality, you're only fooling yourselves. Infinity: The state or quality of being infinite, unlimited by space or time, without end, without beginning or end. Not even your silly blog can refute that.
When you write out .00000000... you are giving it a limit. Once your fingers stopped typing 0s and started typing periods, you gave infinity a limit. At no time did any of your equations include ∞ as a term.
In any case, Dr. Math, a person who agrees with your .000000 repeating nonsense, also contradicts himself on the same website. "The very sentence "1/infinity = 0" has no meaning. Why? Because
"infinity" is a concept, NOT a number. It is a concept that means
"limitlessness." As such, it cannot be used with any mathematical
operators. The symbols of +, -, x, and / are arithmetic operators, and
we can only use them for numbers."
Wait, did I see a fraction that equals .00000 repeating? No I didn't. Because it doesn't exist.
And for your claim that I have to find a number halfway between .0000 repeating and 0 is absurd. That's like me having you graph the function y=1/x and having you tell me the point at which the line crosses either axis. You can't. There is no point at which the line crosses the axis because, infinitely, the line approaches zero but will never get there. Same holds true for .0000 repeating. No matter how many 0s you add, infinitely, it will NEVER equal zero.
Also, can I see that number line with .000000000000... plotted on it? That would be fascinating, and another way to prove your point.
And is .00000000... an integer? I thought an integer was a whole number, which .00000000... obviously is not.
Even with my poor mathematical skills I can see very clearly that while 0 may be approximately equal to 0.000000000... ("to infinity and beyond!"); this certainly does not mean that 0 equals 0.000000000...
It's a matter of perspective and granularity, if you have low granularity then of course the 2 numbers appear to be the same; at closer inspection they are not.
I'm no mathematics professor, and my minor in mathematics from college is beyond a decade old, but you cannot treat a number going out to infinity as if it were a regular number, which is what is trying to be done here. Kind of the "apples" and "oranges" comparison since you cannot really add "infinity" to a number.
Yes, any number going out to an infinite number of decimal points will converge upon the next number in the sequence (eg: .000000... will converge so closely to 0 that it will eventually become indistinguishable from 0 but it will not *be* 0).
The whole topic is more of a "hey, isn't this a cool thing in mathematics that really makes you think?" than "let's actually teach something here."
.00000... equals 0 only if you round down! It will always be incrementing 1/millionth, 1/billionth, or 1/zillionth of a place, (depending on how far you a human actually counts). If we go out infinitely, there is still something extra, no matter how small, that keeps .0000000... for actually being 0.
I don't agree, actually. I do believe in a sort of indefinable and infinitely divisible amount of space between numbers ... especially if we break into the physical world ... like ... how small is the smallest thing? an electron? what is that made up of? and what is that made up of? Is there a thing that is just itself and isn't MADE UP OF SMALLER THINGS? It's really hard to think about ... but I think it's harder to believe that there is one final smallest thing than it is to believe that everything, even very small things, are made up of smaller things.
And thus ... .0000 repeating does not equal zero. It doesn't equal anything. It's just an expression of the idea that we can't cut an even break right there. Sort of like thirds. You cannot cut the number 1 evenly into thirds. You just can't. It's not divisible by 3. But we want to be able to divide it into thirds, so we express it in this totally abstract way by writing 1/3, or .3333 repeating. But, if .0000 repeating adds up to 0, than what does .33333 repeating add up to? and don't say 1/3, because 1/3 isn't a number ... it's an idea.
That's my rational.
The problem is with imagining infinite numbers.
When you multiply .000... with 10 there is one less digit on the infinite number of result which is 0.000 .... minus 0.000...0. It is almost impossible in my opinion to represent graphically .000..x10 in calculation, hence confusion.
I know it is crazy to think of last number of infinite number but infinite numbers are crazy itself.
Through proofs, yes, you have "proven" that .0 repeating equals 0 and also through certain definitions.
But in the realm of logic and another definition you are wrong. .0 repeating is not an integer by the definition of an integer, and 0 most certainly is an integer. Mathematically, algebraicly...whatever, they have the same value, but that doesn't mean they are the same number.
I'm getting more out of "hard" mathematics and more into the paradoxical realm. Have you ever heard of Zeno's paradoxes? I think that's the most relevant counter-argument to this topic. Your "infinity" argument works against you in this respect. While you can never come up with a value that you can represent mathematically on paper to subtract from .000... to equal zero or to come up with an average of the two, that doesn't mean that it doesn't conceptually exist. "Infinity" is just as intangible as whatever that missing value is.
But really in the end, this all just mathematical semantics. By proof, they are equal to each other but otherwise they are not the same number.
It is obvious to me that you do not understand the concept of infinity. Please brush up on it before you continue to teach math beyond an elementary school level. The problem with your logic is that .0 repeating is not an integer, it is an estimation of a number. While .0 repeating and 0 behave identical in any and all algebraic situations, the two numbers differ fundamentally by an infinitely small amount. Therefore, to say that .0 repeating and 0 are the same is not correct. As you continue .0000000... out to infinity, the number becomes infinitely close to 0, however it absolutely never becomes one, so your statement .000 repeating =0 is not correct.
I wrote a short computer program to solve this.
CODE:
Try
If 0 = 0.0000000000... Then
Print True
Else
Print False
End If
Catch Exception ex
Print ex.message
End Try
The result: "Error: Can not convert theoretical values into real world values."
There you have it folks! End of discussion.
If you could show me a mathematical proof that 1 + 1 = 3, that does not mean 1 + 1 = 3, it means there is something wrong with the laws of our math in general.
We know instinctively that 0 does not equal 0.000000...
If you can use math to show differently, then that proves not that 0 = 0.00000... but that there is something wrong with your math, or the laws of our math itself.
Thus, every proof shown in these discussions that tryed to show 0=0.000... is wrong.
0 != 0.000...
The problem here is that usualy only math teachers understand the problem enough to explain it, and unfortunatly they are also the least likly candidates to step out of the box and dare consider the laws of math that they swear by are actualy at fault.
### Would a recount have made a difference?
A couple of days ago George Allen conceded the Virginia Senatorial race.
It was the right move. Here's a quote from his speech (emphasis mine):
"A lot of folks have been asking about the recount. Let me tell you about the recount.
I've said the people of Virginia, the owners of the government, have spoken. They've spoken in a closely divided voice. We have two 49s, but one has 49.55 and the other has 49.25, after at least so far in the canvasses. I'm aware this contest is so close that I have the legal right to ask for a recount at the taxpayers' expense. I also recognize that a recount could drag on all the way until Christmas.
It is with deep respect for the people of Virginia and to bind factions together for a positive purpose that I do not wish to cause more rancor by protracted litigation which would, in my judgment, not alter the results."
I would agree that it wouldn't have altered the results. In fact, when I first conceived of this post, I had envisioned it as a "why Allen should concede" post--little did I know how quickly he would do just that. To understand why, we need to review a little statistics theory.
Last Monday, Dalton Conley wrote a piece in the New York Times entitled The Deciding Vote. In it he explains a fundamental of "statistical dead-heat" elections.
The rub in these cases is that we could count and recount, we could examine every ballot four times over and we’d get — you guessed it — four different results. That’s the nature of large numbers — there is inherent measurement error. We’d like to think that there is a “true” answer out there, even if that answer is decided by a single vote. We so desire the certainty of thinking that there is an objective truth in elections and that a fair process will reveal it.
But even in an absolutely clean recount, there is not always a sure answer. Ever count out a large jar of pennies? And then do it again? And then have a friend do it? Do you always converge on a single number? Or do you usually just average the various results you come to? If you are like me, you probably settle on an average. The underlying notion is that each election, like those recounts of the penny jar, is more like a poll of some underlying voting population.
What this means is that the vote count in an election is not "the true" count, but rather a poll with a very large sample size, and can thus be treated as such. He goes on to offer a suggestion for determining a winner, which if not met should trigger a run-off election.
In an era of small town halls and direct democracy it might have made sense to rely on a literalist interpretation of “majority rule.” After all, every vote could really be accounted for. But in situations where millions of votes are cast, and especially where some may be suspect, what we need is a more robust sense of winning. So from the world of statistics, I am here to offer one: To win, candidates must exceed their rivals with more than 99 percent statistical certainty — a typical standard in scientific research. What does this mean in actuality? In terms of a two-candidate race in which each has attained around 50 percent of the vote, a 1 percent margin of error would be represented by 1.29 divided by the square root of the number of votes cast.
If this sounds like gobbledygook to you, let me try to clarify it by throwing some Greek letters at you. I couldn't find any of my old Statistics texts, but the Wikipedia article is actually quite good, so I will draw from it. (For some even better statistics primers, check out Zeno and Echidne.) Let's start with some definitions (according to Wiki):
The margin of error expresses the amount of the random variation underlying a survey's results. This can be thought of as a measure of the variation one would see in reported percentages if the same poll were taken multiple times. The margin of error is just a specific 99% confidence interval, which is 2.58 standard errors on either side of the estimate.
Standard error = $\sqrt{\frac{p(1-p)}{n}}$, where p is the probability (in the case of an election, the vote percentage; for a dead-heat race, p ≈ 0.5) and n is the sample size (total number of voters).
What does this mean? Since we are looking at a ballot count as a poll, we can use the margin of error to be the random variation we would get from multiple recounts. (The word random is important here. None of these formulas hold if the variation is due to malfeasance).
I won't try to explain where the standard error formula comes from, but I'll try to give some perspective. We can break it into two parts: the numerator and the denominator. The numerator p(1-p) has a maximum when p=0.5 (since 0 < p < 1). This means that the further you get from 50%, the smaller the standard error will be. Therefore, the standard error in a blow-out will be smaller than that from a tie. Since the standard error is inversely proportional to the denominator, the standard error will get smaller as n (# of voters) gets larger. So the more voters you have, the smaller the error you get. One consequence of this is that you reach a point where your standard error is small enough that increasing the sample size gains you very little. (Check out Zeno's excellent post on sample size).
Again, I'll leave it up to the reader to look up how the confidence interval formula is derived--it's a bit beyond the scope of this post. What it means is that since the margin of error is the expected variation from sampling to sampling, we can see it as a multiple of standard errors from the results. And the higher the confidence interval, the more standard errors go into the margin of error. Another way of looking at it is that if you want to be 99% confident that a recount will fall into a certain interval around your result, that interval will need to be wider than if you only wanted to be 68% confident. According to Wiki (again, I'll let you look up the derivation if you wish)
Plus or minus 1 standard error is a 68 % confidence interval, plus or minus 2 standard errors is approximately a 95 % confidence interval, and a 99 % confidence interval is 2.58 standard errors on either side of the estimate.
Therefore,
Margin of error (99%) = 2.58 × $\sqrt{\frac{0.5(1-0.5)}{n}} = \frac{1.29}{\sqrt{n}}$
Which is the formula Dalton mentioned in his article. Anyway, I hope my condensed explanation at least helps a little to explain what those numbers mean.
Now, on to the Virginia race. The total votes cast, n=2,338,111. (For simplicity, I'll be ignoring the Independent candidate Parker and rounding out to p=0.5, so as to use the above formula.) Therefore the margin of error is 0.08%, which comes out to 1972.5 votes. That means that we can be 99% sure that a recount of Allen's votes will be within +/- 1972.5 votes of what it was before. The actual vote count difference between Allen and Webb was 7231 votes--well outside the margin of error. 7231 votes corresponds to a confidence interval of 9.5 standard errors. Allen could've spent the rest of his life recounting the votes and not expected to alter the results. He was absolutely right to concede.
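To make the arithmetic concrete, here is a short Python sketch reproducing the numbers above (the vote totals are the ones quoted in this post):

```python
import math

n = 2_338_111            # total votes cast in the Virginia race
p = 0.5                  # treat the dead heat as p ~ 0.5

# Standard error of the vote share, and its equivalent in votes.
se_share = math.sqrt(p * (1 - p) / n)
se_votes = se_share * n                  # ~764.5 votes

# 99% margin of error: 2.58 standard errors, i.e. 1.29/sqrt(n) as a share.
moe_share = 2.58 * se_share              # ~0.08%
moe_votes = moe_share * n                # ~1972.5 votes

# Webb's actual lead, expressed in standard errors.
lead = 7231
z = lead / se_votes                      # ~9.5 standard errors
```

The lead is nearly five times the 99% margin of error, which is why a recount was hopeless.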
## Saturday, November 11, 2006
### Lithium Ion battery fire
I found this video today of a laptop lithium ion battery fire. It was done under controlled conditions, so I'm not sure how precisely this represents what could happen to my (or your) laptop. Since I've written about this subject before, I was very interested to watch.
## Saturday, November 04, 2006
### Richard Dawkins in Philadelphia
On Thursday, Richard Dawkins came to Philadelphia as part of The God Delusion book tour. Since I've been a fan of his writing for many years now, I had to attend. I was able to get off work early, but I still got to the event late. The auditorium was full and the spillover crowd was mobbed around a closed-circuit television showing the lecture live. I didn't exactly have the best seat in the house, but I was able to catch most of it. He essentially read excerpts from his book and threw in a few personal anecdotes. Much of the talk centered around Biblical evidence supporting the now almost-famous line opening Chapter 2 (page 31).
"The God of the Old Testament is arguably the most unpleasant character in all fiction: jealous and proud of it; a petty, unjust, unforgiving control-freak; a vindictive, bloodthirsty ethnic cleanser; a misogynistic, homophobic, racist, infanticidal, genocidal, filicidal, pestilential, megalomaniacal, sadomasochistic, capriciously malevolent bully."
I have to confess that I just bought my copy on Wednesday and haven't had a chance to read it yet. (I'm still about a hundred pages shy of finishing The Ancestor's Tale.) All indications are that it's going to be a very good read.
Later that evening, Dr. Dawkins appeared on The Rational Response Squad show for a 60 minute round table discussion. I found it quite interesting to see him in a setting other than a standard interview or rehearsed speech. The part I found most interesting was when, at one point, he brought up how many of his critics say that for political reasons he shouldn't make himself so prominent; quotes like "Darwinian natural selection is what led me to become an atheist (my paraphrase, I don't remember the exact quote)" hurt the cause. He said it was a strong argument, that maybe they were right, and asked what his fellow panelists thought about it. That, to me, exemplifies good scientific/rational thinking. You must always be willing to listen to smart people and question your own beliefs and rationales. Kudos to Dawkins for being able to do that.
Personal note:
When I found out that Dawkins was coming to town, I started searching for just the right thing to wear. I settled on a DNA double-helix necktie. I was hoping I'd actually get to talk to him, but it soon became apparent that that wouldn't happen. After waiting in the book signing queue for 20 minutes, one of the ushers came around telling everyone that there wouldn't be time to personalize autographs and that the author would only be signing his name. "Please have your book open to the title page." At that point, my only hope was that he would appreciate my tie.
When I got up there, I told him how I enjoyed the talk, as he autographed my book. When he gave me the book back, I slowly backed away from the table. Then he said "I really like the tie."
Now I know how a star-struck teenaged groupie feels when she finally gets to meet the idol whose posters adorn her bedroom walls.
"(sigh)," he fluttered "I'll never wash this tie again."
# How do I spawn trees so that they stick out perpendicular to a 3D planet?
I have a 3D tree that I want to clone so that it is perpendicular to a planet gameobject, so it looks upright when spawned. I tried by copying down all desired rotations of the tree and adding them into a dictionary full of vector3’s so that I can access any one of them when I need to. However, using Quaternion.Euler() doesn’t copy the coordinates exactly when I pass them in; they are always wrong. Below is the code:
public void PlaceFauna(GameObject prefab, Vector3 position, Vector3 rotation, GameObject icosphere)
{
prefab.transform.position = position; // Moves prefab into scene and sets position
prefab.transform.rotation = Quaternion.Euler(rotation); // Sets rotation to that of the one in the dictionary
GameObject clone = Instantiate(prefab) as GameObject; // Creates clone
clone.name = clone.GetInstanceID().ToString(); // Gives clone a unique name
clone.transform.parent = icosphere.transform; // Puts clone under region
prefab.transform.position = new Vector3(0, 0, 40); // Moves prefab out of scene
}
It takes in the position and the rotation from the dictionary and applies Quaternion.Euler() to the prefab so that the tree's rotation (should) match the stored one. However it does not do this. Where am I going wrong?
prefab.transform.rotation = Quaternion.FromToRotation(new Vector3(0, 0, 1), position);
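For anyone curious what FromToRotation computes under the hood, here is a minimal Python sketch of the same math (a hypothetical standalone implementation, not the Unity API; it breaks down when the two vectors point in exactly opposite directions):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def from_to_rotation(a, b):
    # Quaternion (w, x, y, z) rotating unit vector a onto unit vector b.
    # (Undefined when a and b are exactly opposite.)
    ax, ay, az = a
    bx, by, bz = b
    cx = ay * bz - az * by          # cross product a x b = rotation axis
    cy = az * bx - ax * bz
    cz = ax * by - ay * bx
    w = 1.0 + ax * bx + ay * by + az * bz   # 1 + dot product
    return normalize((w, cx, cy, cz))

def rotate(q, v):
    # Rotate vector v by unit quaternion q = (w, x, y, z).
    w, x, y, z = q
    vx, vy, vz = v
    tx = 2 * (y * vz - z * vy)
    ty = 2 * (z * vx - x * vz)
    tz = 2 * (x * vy - y * vx)
    return (vx + w * tx + y * tz - z * ty,
            vy + w * ty + z * tx - x * tz,
            vz + w * tz + x * ty - y * tx)

up = (0.0, 1.0, 0.0)                     # the tree's local up axis
normal = normalize((3.0, 4.0, 0.0))      # surface normal: position minus planet centre
q = from_to_rotation(up, normal)         # orientation that stands the tree upright
```

The key idea: the surface normal on a sphere is just the spawn position minus the planet's centre, normalized, so rotating the tree's local up axis onto that normal makes it stick out perpendicular to the surface.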
## anonymous 3 years ago Find the value of this limit:
1. anonymous
2. hartnn
write x^2-1 as (x+1)(x-1)
3. anonymous
Okay, did that. I'm suspecting something will cancel somewhere?
4. anonymous
there is an established proof that limit z-->0 (sin(z))/z is 1 put z = x+1 z-->0 => x-->-1 write denominator as (x-1)*(x+1) limit will be (1/2)*1 which is 0.5
5. anonymous
@kulprit the limit is going to -1 not 0, doesn't that matter?
6. anonymous
Should be -1/2, I think...
7. anonymous
limit z-->0 (sin(z))/z is 1 and so limit x-->-1 (sin(x+1))/(x+1) is 1 just substitute k = x+1 x-->-1 ,so x+1-->0 => k-->0
8. anonymous
yeah sorry @Jemurray3 limit is -0.5 disregard that
9. anonymous
I understand how the lim x->0 sin(x)/x = 1 but I don't understand how we can use this when this limit is going to -1, sorry for being difficult :)
10. anonymous
@Jemurray3, can you help me understand how we can use this trig limit even when x approaches -1 and not 0?
11. anonymous
you have lim (x->-1) sin(x+1)/(x+1)
12. anonymous
let u = x+1. As x -> -1, u -> 0, so the above is the same as lim (u -> 0 ) sin(u)/u
13. anonymous
should be 1/(x-1) I believe but yes, that's the idea.
14. anonymous
$\frac{ 1 }{ x-1 } \lim_{x \rightarrow -1} \frac{ \sin(x+1) }{ x+1 }$
15. anonymous
how can we say that u ->0?
16. anonymous
because (x+1) goes to zero as x goes to -1.
17. anonymous
ooohhh gotcha
18. anonymous
what do we do with the 1/(x-1)
19. anonymous
Nothing, that's perfectly continuous as x approaches -1 so that just becomes -1/2.
20. anonymous
is that where we get the -0.5?
21. anonymous
oh.. k good. thanks so much
22. anonymous
Oh, I wasn't paying attention... that should be on the right side of the limit. so it should be $\lim_{x \rightarrow -1} \frac{\sin(x+1)}{x^2-1} = \lim_{x \rightarrow -1} \frac{\sin(x+1)}{x+1}\cdot \frac{1}{x-1}$ $= \frac{-1}{2}$
23. anonymous
oh right, because it's not a constant right?
24. anonymous
right.
25. anonymous
thanks, you rock.
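The thread's answer is easy to sanity-check numerically; a quick Python sketch evaluating the function at values of x close to -1:

```python
import math

def f(x):
    return math.sin(x + 1) / (x**2 - 1)

# Approach x = -1 from both sides; both values should be near -1/2,
# matching sin(x+1)/(x+1) -> 1 times 1/(x-1) -> -1/2.
h = 1e-6
left, right = f(-1 - h), f(-1 + h)
```

Both one-sided values agree with the analytic answer of -0.5.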
+0
# Range
0
46
1
The range of the function g(x) = 2/(2 + 4x + 3x^2) can be written as an interval (a,b]. What is a+b?
Jun 22, 2022
#1
+13793
+1
What is a+b?
Hello Guest!
$$g(x)=2/f(x)\\ f(x)=3x^2+4x+2\\ \frac{df(x)}{dx}=6x+4=0\\ x_{min}=-\dfrac{2}{3}\\ {\color{blue}g(x)_{max}=}2/(3\cdot \frac{4}{9}-\frac{8}{3}+2)=\color{blue}3$$
$f$ has its minimum at $x=-\frac{2}{3}$, so $g$ peaks there; since $g(x)>0$ everywhere and $g(x)\to 0$ as $x\to\pm\infty$, the range is $(0,3]$.
$$\color{blue}a + b = 0 + 3 = 3$$
!
Jun 22, 2022
edited by asinus Jun 22, 2022
edited by asinus Jun 22, 2022
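asinus's answer can also be checked numerically; a Python sketch (the sample grid and tolerances are arbitrary choices):

```python
def g(x):
    return 2 / (2 + 4 * x + 3 * x**2)

# f(x) = 3x^2 + 4x + 2 has its vertex (minimum) at x = -2/3, where g peaks.
peak = g(-2 / 3)                       # should be 3

# Sample widely: g stays positive and tends to 0 as |x| grows.
samples = [g(x / 100) for x in range(-10_000, 10_001)]
```

The peak value 3 is attained, the function never reaches 0, so the range is (0, 3] and a + b = 3.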
[NTG-context] Rule under length of last line
Hans Hagen pragma at wxs.nl
Mon Jul 31 21:12:40 CEST 2006
Taco Hoekwater wrote:
> Duncan Hothersall wrote:
>
>> We generate the ConTeXt code from XML, so ideally a solution wouldn't
>> require the last line to be set separately, but would just work whether
>> the heading was single or multiple line. What I'm really looking for is
>> a subsection setup that will automatically do this whatever length of title.
>>
>
> It is easier than you think:
>
> \def\Myway#1%
>   {#1\vrule height 0pt depth 6pt width 0pt % title + force 6pt
>    \optimizedisplayspacingtrue\setlastlinewidth % core-mat macro
>    \hrule width \the\lastlinewidth}
>
>
> The key element is \setlastlinewidth, which measures the width
> of the final line of the current paragraph.
>
i was thinking of that as a third solution but somehow you trust \setlastlinewidth more than i do -)
Hans
-----------------------------------------------------------------
# zbMATH — the first resource for mathematics
Asymptotic behavior of solutions of nonlinear difference equations. (English) Zbl 1080.39501
Summary: The nonlinear difference equation $x_{n+1}-x_n=a_n\varphi _n(x_{\sigma (n)})+b_n, \tag{$$\text{E}$$}$ where $$(a_n), (b_n)$$ are real sequences, $$\varphi _n\: \mathbb R\rightarrow \mathbb R$$, $$(\sigma (n))$$ is a sequence of integers and $$\displaystyle\lim _{n\rightarrow \infty }\sigma (n)=\infty$$, is investigated. Sufficient conditions for the existence of solutions of this equation asymptotically equivalent to the solutions of the equation $$y_{n+1}-y_n=b_n$$ are given. Sufficient conditions under which for every real constant there exists a solution of equation (E) convergent to this constant are also obtained.
## 11.23 Box Plot Distributions
REVIEW
ds %>%
mutate(year=factor(format(ds$date, "%Y"))) %>%
ggplot(aes(x=year, y=max_temp, fill=year)) +
geom_boxplot(notch=TRUE) +
theme(legend.position="none")
A box plot, also known as a box and whiskers plot, shows the median (the second quartile) within a box which extends to the first and third quartiles. We note that each quartile delimits one quarter of the dataset and hence the box itself contains half the dataset.
Colour is added simply to improve the visual appeal of the plot rather than to convey new information. Since we include fill= we also turn off the otherwise included legend.
Here we observe the overall change in the maximum temperature over the years. Notice the first and last plots which probably reflect truncated data, providing motivation to confirm this in the data, before making significant statements regarding these observations.
# Collaborative filtering¶
• Collaborative filtering: Generally, the process of filtering out some data by collaborating data from different data sources/agents. Specifically, this process with regards to building recommendation systems. Making predictions (filtering) by collecting data from lots of different users about their preferences/habits (collaboration).
• Matrix decomposition/factoring: The process of taking a single matrix and expressing it as the product of two or more matrices. You can think of gradient descent as a way of doing this. For example, if we have a table of data (matrix) of {users}x{movies}, filled with the users' scores for those movies, we could have two matrices -- one for movie factors, and one for user factors. We could then set the values of those two matrices via gradient descent so that their product matches (as closely as possible) the actual scores users gave the movies. So we have taken the score matrix and expressed it as the product of the movie-factor and user-factor matrices. (Actually, our operations are not technically matrix decomposition because we fill in 0 values for the missing ratings.)
• Observed features/Latent features: (aka "factors" or "variables") Observed features are the features that are explicitly read into the model. For example, words in a text. Latent features are the "hidden" features -- usually "discovered" by some aggregate of the observed features. For example, the topic of a text. Thinking of the matrix decomposition above, each column for a movie could represent some value -- special effects, year of release, etc. And each row of the user matrix could represent how much that user values that feature. Those features, like special effects, etc., would be the latent features. The features that can't be directly observed vs. those that can.
Let's take a look at collab_filter.xlsx as an example:
This whole process could be thought of as collaborative filtering using matrix decomposition. (We're breaking our matrix into 2 different matrices and using it to make some predictions).
The matrix above the movies and the matrix to the left of the users are embedding matrices for those things.
So, in summary, the process for this shallow learning is:
1. Init your user/movie/movie scores matrix using randomly initialized embedding matrices for the movies and users
2. Set up your cost function
3. Minimize your cost function using gradient descent, thus setting more accurate embedding matrix values
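The three steps above can be sketched in a few lines of NumPy (a hypothetical tiny ratings matrix with 0 marking missing entries; the learning rate and iteration count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny hypothetical ratings matrix (0 means "not rated").
R = np.array([[5., 3., 0., 1.],
              [4., 0., 0., 1.],
              [1., 1., 0., 5.],
              [0., 1., 5., 4.]])
mask = R > 0                  # score only the observed entries
k = 3                         # number of latent factors

# Step 1: randomly initialised embedding matrices.
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
M = rng.normal(scale=0.1, size=(R.shape[1], k))   # movie factors

# Step 2: the cost function -- mean squared error on observed ratings.
def loss(U, M):
    err = (U @ M.T - R) * mask
    return (err ** 2).sum() / mask.sum()

# Step 3: minimise the cost with plain gradient descent on both matrices.
lr = 0.01
losses = [loss(U, M)]
for _ in range(1000):
    err = (U @ M.T - R) * mask
    U, M = U - lr * err @ M, M - lr * err.T @ U
    losses.append(loss(U, M))
```

After training, U @ M.T approximates the observed ratings, and the entries at the masked-out positions are the model's predictions.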
## Collaborative filtering using fast.ai¶
In [1]:
# First do our usual imports
%matplotlib inline
from fastai.learner import *
from fastai.column_data import *
In [2]:
# Then set up the path
path = "data/ml-latest-small/"
In [3]:
# Take a look at the data
# We can see it contains a userId, movieId, and rating. We want to predict the rating.
ratings = pd.read_csv(path+'ratings.csv')
ratings.head()
Out[3]:
userId movieId rating timestamp
0 1 31 2.5 1260759144
1 1 1029 3.0 1260759179
2 1 1061 3.0 1260759182
3 1 1129 2.0 1260759185
4 1 1172 4.0 1260759205
In [4]:
# We can also get the movie names too
movie_details = pd.read_csv(path+'movies.csv')
movie_details.head()
Out[4]:
movieId title genres
0 1 Toy Story (1995) Adventure|Animation|Children|Comedy|Fantasy
2 3 Grumpier Old Men (1995) Comedy|Romance
3 4 Waiting to Exhale (1995) Comedy|Drama|Romance
4 5 Father of the Bride Part II (1995) Comedy
In [5]:
# Though not required for modelling, we create a cross tab of the top users and top movies, like we had in our Excel file
# First get the users who have given the most ratings
group = ratings.groupby('userId')['rating'].count()
topUsers = group.sort_values(ascending=False)[:15]
topUsers
Out[5]:
userId
547 2391
564 1868
624 1735
15 1700
73 1610
452 1340
468 1291
380 1063
311 1019
30 1011
294 947
509 923
580 922
213 910
212 876
Name: rating, dtype: int64
In [6]:
# Now get the movies which are the highest rated
group = ratings.groupby('movieId')['rating'].count()
topMovies = group.sort_values(ascending=False)[:15]
topMovies
Out[6]:
movieId
356 341
296 324
318 311
593 304
260 291
480 274
2571 259
1 247
527 244
589 237
1196 234
110 228
1270 226
608 224
1198 220
Name: rating, dtype: int64
In [7]:
# Now join them together
top_ranked = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')
In [8]:
top_ranked = top_ranked.join(topMovies, rsuffix='_r', how='inner', on='movieId')
pd.crosstab(top_ranked.userId, top_ranked.movieId, top_ranked.rating, aggfunc=np.sum)
Out[8]:
movieId 1 110 260 296 318 356 480 527 589 593 608 1196 1198 1270 2571
userId
15 2.0 3.0 5.0 5.0 2.0 1.0 3.0 4.0 4.0 5.0 5.0 5.0 4.0 5.0 5.0
30 4.0 5.0 4.0 5.0 5.0 5.0 4.0 5.0 4.0 4.0 5.0 4.0 5.0 5.0 3.0
73 5.0 4.0 4.5 5.0 5.0 5.0 4.0 5.0 3.0 4.5 4.0 5.0 5.0 5.0 4.5
212 3.0 5.0 4.0 4.0 4.5 4.0 3.0 5.0 3.0 4.0 NaN NaN 3.0 3.0 5.0
213 3.0 2.5 5.0 NaN NaN 2.0 5.0 NaN 4.0 2.5 2.0 5.0 3.0 3.0 4.0
294 4.0 3.0 4.0 NaN 3.0 4.0 4.0 4.0 3.0 NaN NaN 4.0 4.5 4.0 4.5
311 3.0 3.0 4.0 3.0 4.5 5.0 4.5 5.0 4.5 2.0 4.0 3.0 4.5 4.5 4.0
380 4.0 5.0 4.0 5.0 4.0 5.0 4.0 NaN 4.0 5.0 4.0 4.0 NaN 3.0 5.0
452 3.5 4.0 4.0 5.0 5.0 4.0 5.0 4.0 4.0 5.0 5.0 4.0 4.0 4.0 2.0
468 4.0 3.0 3.5 3.5 3.5 3.0 2.5 NaN NaN 3.0 4.0 3.0 3.5 3.0 3.0
509 3.0 5.0 5.0 5.0 4.0 4.0 3.0 5.0 2.0 4.0 4.5 5.0 5.0 3.0 4.5
547 3.5 NaN NaN 5.0 5.0 2.0 3.0 5.0 NaN 5.0 5.0 2.5 2.0 3.5 3.5
564 4.0 1.0 2.0 5.0 NaN 3.0 5.0 4.0 5.0 5.0 5.0 5.0 5.0 3.0 3.0
580 4.0 4.5 4.0 4.5 4.0 3.5 3.0 4.0 4.5 4.0 4.5 4.0 3.5 3.0 4.5
624 5.0 NaN 5.0 5.0 NaN 3.0 3.0 NaN 3.0 5.0 4.0 5.0 5.0 5.0 2.0
### Collaborative filtering¶
Now we will do the actual collaborative filtering. This is pretty similar to our previous processes.
In [12]:
# First, get the cross validation indexes -- a random 20% of rows we can use for validaton
val_idxs = get_cv_idxs(len(ratings))
# Weight decay. This will be covered later. 2e-4 means 2x10^-4 (0.0002)
wd = 2e-4
# This is the depth of the embedding matrix. Can be thought of as the number of latent features. (see note above)
n_factors = 50
In [13]:
# Now declare our data and learner
# We pass in the two columns and the thing we want to predict -- like we had in our Excel example earlier
collaborative_filter_data = CollabFilterDataset.from_csv(path, 'ratings.csv', 'userId', 'movieId', 'rating')
learn = collaborative_filter_data.get_learner(n_factors, val_idxs, 64, opt_fn=optim.Adam)
In [15]:
# Do the learning
# These params were figured out using trials, like usual
learn.fit(1e-2, 2, wds=wd, cycle_len=1, cycle_mult=2)
epoch trn_loss val_loss
0 0.831135 0.810703
1 0.791689 0.780824
2 0.617506 0.765011
Out[15]:
[0.76501125]
The evaluation metric here is MSE -- mean squared error: the sum of (actual value - predicted value)^2, divided by the number of samples. So we'll take the square root to get our RMSE.
In [16]:
math.sqrt(0.765)
Out[16]:
0.8746427842267951
### Movie bias¶
Our bias affects the movie rating, so we can also think of it as a measure of how good/bad movies are.
In [19]:
# First, convert the IDs to contiguous values, like we did for our model.
movie_names = movie_details.set_index('movieId')['title'].to_dict()
group = ratings.groupby('movieId')['rating'].count()
top_movies = group.sort_values(ascending=False).index.values[:3000]
top_movie_idx = np.array([collaborative_filter_data.item2idx[o] for o in top_movies])
If we want to view the layers in our PyTorch model, we can just call it.
So below we have a model wth two embedding layers, and then two bias layers -- one of user biases, and one for item biases (in this case, items = movies).
You can see the 0th element is the number of items, and the 1st element is the number of features. For example, in our user embedding layer, we have 671 users and 50 features, in our item bias layer we have 9066 movies and 1 bias for each movie, etc.
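An embedding layer of shape (items, features) is essentially a lookup table, and its forward pass is just row indexing. A NumPy sketch with the shapes from the model above (random values, arbitrary example indices):

```python
import numpy as np

rng = np.random.default_rng(0)

item_emb = rng.normal(size=(9066, 50))   # like (i): Embedding(9066, 50)
item_bias = rng.normal(size=(9066, 1))   # like (ib): Embedding(9066, 1)

idx = np.array([356, 296, 318])          # three example row indices
vectors = item_emb[idx]                  # -> shape (3, 50)
biases = item_bias[idx]                  # -> shape (3, 1)
```

This is exactly what passing V(top_movie_idx) into model.ib does below, except PyTorch also tracks gradients through the lookup.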
In [20]:
model = learn.model
model.cuda()
Out[20]:
EmbeddingDotBias(
(u): Embedding(671, 50)
(i): Embedding(9066, 50)
(ub): Embedding(671, 1)
(ib): Embedding(9066, 1)
)
Here we take our top movie IDs and pass them into the item bias layer to get the biases for the movie.
Note: PyTorch lets you do this -- pass in indices to a layer to get the corresponding values. The indicies must be converted to PyTorch Variables first. Recall that a variable is basically like a tensor that supports automatic differentiation.
We then convert the resulting data to a NumPy array so that work can be done on the CPU.
In [23]:
# Take a look at the movie bias
# Input is a movie id, and output is the movie bias (a float)
movie_bias = to_np(model.ib(V(top_movie_idx)))
In [24]:
movie_bias
Out[24]:
array([[ 0.85251],
[ 0.89408],
[ 1.31877],
...,
[ 0.22685],
[-0.03515],
[ 0.24388]], dtype=float32)
In [33]:
movie_bias.shape
Out[33]:
(3000, 1)
In [ ]:
# Zip up the movie names with their respective biases
movie_ratings = [(b[0], movie_names[i]) for i,b in zip(top_movies, movie_bias)]
Now we can look at top and bottom rated movies, corrected for reviewer sentiment, and the different types of movies viewers watch.
In [30]:
# Sort by the 0th element in the tuple (the bias)
sorted(movie_ratings, key=lambda o: o[0])[:15]
Out[30]:
[(-0.9562768, 'Battlefield Earth (2000)'),
(-0.73659664, 'Anaconda (1997)'),
(-0.7353736, 'Speed 2: Cruise Control (1997)'),
(-0.7109455, 'Wild Wild West (1999)'),
(-0.6921251, 'Mighty Morphin Power Rangers: The Movie (1995)'),
(-0.6649571, 'Super Mario Bros. (1993)'),
(-0.655268, 'Batman & Robin (1997)'),
(-0.63718784, 'Haunting, The (1999)'),
(-0.59907967, 'Flintstones, The (1994)'),
(-0.59654623, 'Superman III (1983)'),
(-0.58483046, 'Congo (1995)'),
(-0.5782997, 'Showgirls (1995)'),
(-0.57199323, 'Little Nicky (2000)'),
(-0.5705105, 'Message in a Bottle (1999)')]
In [31]:
# (Same as above)
sorted(movie_ratings, key=itemgetter(0))[:15]
Out[31]:
[(-0.9562768, 'Battlefield Earth (2000)'),
(-0.73659664, 'Anaconda (1997)'),
(-0.7353736, 'Speed 2: Cruise Control (1997)'),
(-0.7109455, 'Wild Wild West (1999)'),
(-0.6921251, 'Mighty Morphin Power Rangers: The Movie (1995)'),
(-0.6649571, 'Super Mario Bros. (1993)'),
(-0.655268, 'Batman & Robin (1997)'),
(-0.63718784, 'Haunting, The (1999)'),
(-0.59907967, 'Flintstones, The (1994)'),
(-0.59654623, 'Superman III (1983)'),
(-0.58483046, 'Congo (1995)'),
(-0.5782997, 'Showgirls (1995)'),
(-0.57199323, 'Little Nicky (2000)'),
(-0.5705105, 'Message in a Bottle (1999)')]
In [32]:
sorted(movie_ratings, key=lambda o: o[0], reverse=True)[:15]
Out[32]:
[(1.3187655, 'Shawshank Redemption, The (1994)'),
(1.0735388, 'Godfather, The (1972)'),
(1.0717344, 'Usual Suspects, The (1995)'),
(0.9121452, "Schindler's List (1993)"),
(0.903625, 'To Kill a Mockingbird (1962)'),
(0.8940818, 'Pulp Fiction (1994)'),
(0.89336175, 'Fargo (1996)'),
(0.887614, 'Matrix, The (1999)'),
(0.8801452, 'Silence of the Lambs, The (1991)'),
(0.8669827, 'Godfather: Part II, The (1974)'),
(0.8619761, 'Star Wars: Episode IV - A New Hope (1977)'),
(0.852508, 'Forrest Gump (1994)'),
(0.84972376, 'Dark Knight, The (2008)'),
(0.84826905, '12 Angry Men (1957)'),
(0.8375876, 'Rear Window (1954)')]
### Interpreting embedding matrices¶
In [36]:
movie_embeddings = to_np(model.i(V(top_movie_idx)))
movie_embeddings.shape
Out[36]:
(3000, 50)
It's hard to interpret 50 different factors. We use Principal Component Analysis (PCA) to simplify them down to 3 vectors.
PCA reduces the dimensionality down to $n$ components. Here it finds 3 linear combinations of our 50 embedding dimensions which capture as much variation as possible, while also keeping those 3 linear combinations as different from each other as possible.
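What PCA is doing here can be sketched with plain NumPy via the SVD (random stand-in data of the same shape as the embeddings; note the notebook fits on the transpose, which is why its components_ come out with shape (3, 3000)):

```python
import numpy as np

rng = np.random.default_rng(0)
emb = rng.normal(size=(3000, 50))       # stand-in for movie_embeddings

# Centre the data, then take the SVD; rows of Vt are the principal directions.
centred = emb - emb.mean(axis=0)
U, S, Vt = np.linalg.svd(centred, full_matrices=False)

components = Vt[:3]                     # 3 linear combinations of the 50 dims
reduced = centred @ components.T        # each movie described by 3 numbers
```

The three rows of `components` are orthogonal to each other, which is the precise sense in which the combinations are "as different to each other as possible".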
In [37]:
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
movie_pca = pca.fit(movie_embeddings.T).components_
In [38]:
movie_pca.shape
Out[38]:
(3, 3000)
In [40]:
factor0 = movie_pca[0]
movie_component = [(factor, movie_names[i]) for factor,i in zip(factor0, top_movies)]
In [42]:
# Looking at the first component, it looks like it's something like classier movies vs. more lighthearted
sorted(movie_component, key=itemgetter(0), reverse=True)[:10]
Out[42]:
[(0.08261366, 'Independence Day (a.k.a. ID4) (1996)'),
(0.060704462, 'Armageddon (1998)'),
(0.057128586, 'Lost World: Jurassic Park, The (1997)'),
(0.05646295, "Charlie's Angels (2000)"),
(0.05587433, 'X-Men (2000)'),
(0.054502834, 'Grumpier Old Men (1995)'),
(0.053873993, 'Pearl Harbor (2001)'),
(0.05382658, 'Police Academy 4: Citizens on Patrol (1987)'),
(0.052748825, 'Miss Congeniality (2000)'),
(0.049758486, 'Waterworld (1995)')]
In [43]:
sorted(movie_component, key=itemgetter(0))[:10]
Out[43]:
[(-0.072085, 'Taxi Driver (1976)'),
(-0.070109025, 'Fargo (1996)'),
(-0.06869264, 'Chinatown (1974)'),
(-0.06718892, 'Godfather, The (1972)'),
(-0.06630573, 'Apocalypse Now (1979)'),
(-0.06497336, 'Pulp Fiction (1994)'),
(-0.0637139, 'Casablanca (1942)'),
(-0.061072655, 'Goodfellas (1990)'),
(-0.059540763, 'Shining, The (1980)'),
(-0.058864854, 'Maltese Falcon, The (1941)')]
In [45]:
factor1 = movie_pca[1]
movie_component = [(factor, movie_names[i]) for factor,i in zip(factor1, top_movies)]
In [47]:
# Looking at the second component, it looks more like CGI vs dialogue-driven
sorted(movie_component, key=itemgetter(0), reverse=True)[:10]
Out[47]:
[(0.065984644, 'Mission to Mars (2000)'),
(0.060737364, 'Island of Dr. Moreau, The (1996)'),
(0.0543456, 'Tank Girl (1995)'),
(0.05323038, 'Batman & Robin (1997)'),
(0.050556783, "Joe's Apartment (1996)"),
(0.04883899, 'Showgirls (1995)'),
(0.04720561, 'Catwoman (2004)'),
(0.046349775, 'Piano, The (1993)'),
(0.04605449, 'Bringing Up Baby (1938)')]
In [48]:
sorted(movie_component, key=itemgetter(0))[:10]
Out[48]:
[(-0.13760568, 'Lord of the Rings: The Return of the King, The (2003)'),
(-0.13532887, 'Lord of the Rings: The Fellowship of the Ring, The (2001)'),
(-0.12333063, 'Lord of the Rings: The Two Towers, The (2002)'),
(-0.103825174, 'Star Wars: Episode VI - Return of the Jedi (1983)'),
(-0.09216414, 'Lethal Weapon (1987)'),
(-0.09151409, 'Jurassic Park (1993)'),
(-0.090675056, 'Spider-Man (2002)'),
(-0.08928624,
'Raiders of the Lost Ark (Indiana Jones and the Raiders of the Lost Ark) (1981)'),
(-0.08467902, 'Die Hard (1988)'),
(-0.083290756, 'X2: X-Men United (2003)')]
In [49]:
# We can map these two components against each other
idxs = np.random.choice(len(top_movies), 50, replace=False)
X = factor0[idxs]
Y = factor1[idxs]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(top_movies[idxs], X, Y):
plt.text(x,y,movie_names[i], color=np.random.rand(3)*0.7, fontsize=11)
plt.show()
## Collaborative Filtering from scratch¶
In this section, we'll look at implementing collaborative filtering from scratch.
In [1]:
# Do our imports again in case we want to run from here
%matplotlib inline
from fastai.learner import *
from fastai.column_data import *
In [2]:
# Then set up the path
path = "data/ml-latest-small/"
In [17]:
ratings = pd.read_csv(path+'ratings.csv')
Out[17]:
userId movieId rating timestamp
0 1 31 2.5 1260759144
1 1 1029 3.0 1260759179
2 1 1061 3.0 1260759182
3 1 1129 2.0 1260759185
4 1 1172 4.0 1260759205
## PyTorch Arithmetic¶
In [3]:
# Declare tensors (n-dimensonal matrices)
a = T([[1.,2],
[3,4]])
b = T([[2.,2],
[10,10]])
a,b
Out[3]:
(
1 2
3 4
[torch.cuda.FloatTensor of size 2x2 (GPU 0)],
2 2
10 10
[torch.cuda.FloatTensor of size 2x2 (GPU 0)])
In [7]:
# Element-wise multiplication
a*b
Out[7]:
2 4
30 40
[torch.cuda.FloatTensor of size 2x2 (GPU 0)]
### CUDA¶
To run on the graphics card, add .cuda() to the end of PyTorch calls. Otherwise they will run on the CPU.
In [8]:
# This is running on the GPU
a*b.cuda()
Out[8]:
2 4
30 40
[torch.cuda.FloatTensor of size 2x2 (GPU 0)]
In [9]:
# Element-wise multiplication and sum across the columns
# This is the tensor dot product.
# I.e., the dot product of [1,2] and [2,2] = 6, and [3,4]*[10,10] = 70
(a*b).sum(1)
Out[9]:
6
70
[torch.cuda.FloatTensor of size 2 (GPU 0)]
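The same row-wise dot product can be reproduced in plain Python (a standalone sketch, not using PyTorch), which makes the multiply-then-sum structure explicit:

```python
# Reproduce (a*b).sum(1) without PyTorch: multiply element-wise,
# then sum across each row -- a row-wise dot product.
a = [[1.0, 2.0], [3.0, 4.0]]
b = [[2.0, 2.0], [10.0, 10.0]]

row_dots = [sum(x * y for x, y in zip(row_a, row_b))
            for row_a, row_b in zip(a, b)]
print(row_dots)  # [6.0, 70.0]
```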
## PyTorch Modules¶
We can build our own neural network layer to process inputs and compute activations.
In PyTorch, we call this a module. I.e., we are going to build a PyTorch module. Modules can be passed in to neural nets.
PyTorch modules are derived from nn.Module (neural network module).
Modules must contain a function called forward that will compute the forward activations -- do the forward pass.
This forward function is called automatically when the module instance is called like a function, i.e., module(a,b) will call forward(a,b).
In [11]:
# We can create a module that does dot products between tensors
class DotProduct(nn.Module):
def forward(self, users, movies):
return (users*movies).sum(1)
In [12]:
model = DotProduct()
In [14]:
# This will call the forward function.
model(a, b)
Out[14]:
6
70
[torch.cuda.FloatTensor of size 2 (GPU 0)]
### A more complex module/fixing up index values¶
Now, let's create a more complex module to do the work we were doing in our spreadsheet.
But first, we have a slight problem: user and movie IDs are not contiguous. For example, our user IDs might jump from 1000 to 1400. If we indexed directly by ID, we would need those extra 400 unused rows in our tensor. So we'll do some data fixing to map each ID to a sequential, contiguous index.
In [23]:
# Get the unique user IDs
unique_users = ratings.userId.unique()
# Get a list of sequential IDs using enumerate
user_to_index = {o:i for i,o in enumerate(unique_users)}
# Map the userIds in ratings using user_to_index
ratings.userId = ratings.userId.apply(lambda x: user_to_index[x])
In [24]:
# Do the same for movie IDs
unique_movies = ratings.movieId.unique()
movie_to_index = {o:i for i,o in enumerate(unique_movies)}
ratings.movieId = ratings.movieId.apply(lambda x: movie_to_index[x])
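The same remapping logic can be checked on a toy list of IDs (a standalone sketch; the values here are illustrative, not from the dataset):

```python
# Map raw, non-contiguous IDs to sequential indices, exactly as with
# user_to_index above: enumerate the unique values, then look each ID up.
raw_ids = [1000, 1400, 1000, 7, 1400]
unique = list(dict.fromkeys(raw_ids))            # unique, first-seen order
to_index = {o: i for i, o in enumerate(unique)}
mapped = [to_index[x] for x in raw_ids]
print(mapped)  # [0, 1, 0, 2, 1]
```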
In [26]:
number_of_users = int(ratings.userId.nunique())
number_of_movies = int(ratings.movieId.nunique())
number_of_users, number_of_movies
Out[26]:
(671, 9066)
## Creating the module¶
Now let's create our module. This will be a module that holds an embedding matrix for our users and movies. The forward pass will do a dot product on them.
The module will use nn.Embedding to create the embedding matrices. These are PyTorch variables. Variables support all the operations that tensors do, except they also support automatic differentiation.
When we want to access the tensor part of an embedding's weight variable, we call .weight.data on it.
If we put _ at the end of a PyTorch tensor function, it performs the operation in place.
We initialize our embedding matrices to small random numbers, in the spirit of He initialization. (PyTorch's kaiming_uniform_ can perform He initialization as well.)
The flow of the module will be like this:
1. Look up the factors for the users from the embedding matrix
2. Look up the factors for the movies from the embedding matrix
3. Take the dot product
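Before writing the PyTorch module, the three-step flow can be sketched in plain Python, treating each embedding matrix as a lookup table of factor vectors (toy numbers, purely illustrative):

```python
# Embeddings as lookup tables: user/movie index -> factor vector.
user_emb = {0: [0.1, 0.2], 1: [0.3, 0.0]}
movie_emb = {0: [1.0, 1.0], 1: [0.5, 2.0]}

def predict(user, movie):
    u = user_emb[user]      # step 1: look up the user's factors
    m = movie_emb[movie]    # step 2: look up the movie's factors
    return sum(uf * mf for uf, mf in zip(u, m))  # step 3: dot product

print(predict(0, 1))  # 0.1*0.5 + 0.2*2.0 = 0.45
```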
In [46]:
number_of_factors = 50
In [59]:
class EmbeddingNet(nn.Module):
def __init__(self, number_of_users, number_of_movies):
super().__init__()
# Create embedding matrices for users and movies
self.user_embedding_matrix = nn.Embedding(number_of_users, number_of_factors)
self.movie_embedding_matrix = nn.Embedding(number_of_movies, number_of_factors)
# Initialize the embedding matrices
# .weight.data gets the tensor part of the variable
# Using _ performs the operation in place
self.user_embedding_matrix.weight.data.uniform_(0,0.05)
self.movie_embedding_matrix.weight.data.uniform_(0,0.05)
    # Forward pass
# As with our structured data example, we can take in categorical and continuous variables
# (But both our users and movies are categorical)
def forward(self, categorical, continuous):
# Get the users and movies params
users,movies = categorical[:,0],categorical[:,1]
# Get the factors from our embedding matrices
user_factors,movie_factors = self.user_embedding_matrix(users), self.movie_embedding_matrix(movies)
# Take the dot product
return (user_factors*movie_factors).sum(1)
In [60]:
# Now we want to set up our x and y for our crosstab
# X = everything except rating and timestamp (row/column for our cross tab)
# Y = ratings (result in our cross tab)
x = ratings.drop(['rating', 'timestamp'],axis=1)
y = ratings['rating'].astype('float32')
In [61]:
x.head()
Out[61]:
userId movieId
0 0 0
1 0 1
2 0 2
3 0 3
4 0 4
In [62]:
y.head()
Out[62]:
0 2.5
1 3.0
2 3.0
3 2.0
4 4.0
Name: rating, dtype: float32
In [63]:
val_idxs = get_cv_idxs(len(ratings))
In [64]:
# Just use fast.ai to set up the dataloader
data = ColumnarModelData.from_data_frame(path, val_idxs, x, y, ['userId', 'movieId'], 64)
In [71]:
weight_decay=1e-5
model = EmbeddingNet(number_of_users, number_of_movies).cuda()
# optim creates the optimization function
# model.parameters() fetches the weights from the nn.Module superclass (anything of type nn.[weight type] e.g. Embedding)
opt = optim.SGD(model.parameters(), 1e-1, weight_decay=weight_decay, momentum=0.9)
In [72]:
# Call the PyTorch training loop (we'll write our own later on)
fit(model, data, 3, opt, F.mse_loss)
epoch trn_loss val_loss
0 1.649158 1.637204
1 1.117915 1.309114
2 0.903568 1.219225
Out[72]:
[1.2192254]
We can see that our loss is still quite high.
We can manually do some learning rate annealing and call fit again.
In [73]:
set_lrs(opt, 0.01)
In [74]:
fit(model, data, 3, opt, F.mse_loss)
epoch trn_loss val_loss
0 0.685637 1.1429
1 0.694845 1.133847
2 0.700296 1.129204
Out[74]:
[1.1292036]
## Bias¶
Our loss still doesn't compete with the fast.ai library. One reason for this is the lack of bias terms.
Consider that one movie tends to have particularly high ratings, or a certain user tends to give low scores to movies. We want to account for these case-by-case variances, so we give each movie and user a bias and add it onto our dot product. In practice, this is like an extra row stuck onto our movie and user tensors.
So now we will create a new model that takes bias into account.
This will have a few other differences:
1. It uses a convenience method to create embeddings
2. It normalizes the scores returned from the forward pass to the rating range
This second step is not strictly necessary, but it will make it easier to fit parameters.
The sigmoid function is called from F, which is PyTorch's functional library.
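The scaling in step 2 can be sanity-checked with plain Python (a sketch of the same formula, using `math.exp` for the sigmoid instead of `F.sigmoid`):

```python
import math

def scaled_sigmoid(x, min_rating=0.5, max_rating=5.0):
    """Squash an unbounded activation into the rating range."""
    s = 1.0 / (1.0 + math.exp(-x))               # sigmoid -> (0, 1)
    return s * (max_rating - min_rating) + min_rating

# Any activation, however extreme, lands inside [0.5, 5.0]:
print(scaled_sigmoid(-100.0))  # close to 0.5
print(scaled_sigmoid(0.0))     # midpoint: 2.75
print(scaled_sigmoid(100.0))   # close to 5.0
```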
In [77]:
# For step 2, score normalizing
min_rating, max_rating = ratings.rating.min(), ratings.rating.max()
min_rating, max_rating
Out[77]:
(0.5, 5.0)
In [91]:
# number_of_inputs = rows in the embedding matrix
# number_of_factors = columns in the embedding matrix
def get_embedding(number_of_inputs, number_of_factors):
embedding = nn.Embedding(number_of_inputs, number_of_factors)
embedding.weight.data.uniform_(-0.01, 0.01)
return embedding
class EmbeddingDotBias(nn.Module):
def __init__(self, number_of_users, number_of_movies):
super().__init__()
# Initialize embedding matrices and bias vectors
(self.user_embedding_matrix, self.movie_embedding_matrix, self.user_biases, self.movie_biases) = [get_embedding(*o) for o in [
(number_of_users, number_of_factors), (number_of_movies, number_of_factors), (number_of_users, 1), (number_of_movies, 1)
]]
def forward(self, categorical, continuous):
users, movies = categorical[:,0], categorical[:,1]
# Do our dot product
user_dot_movies = (self.user_embedding_matrix(users)*self.movie_embedding_matrix(movies)).sum(1)
# Add on our bias vectors
results = user_dot_movies + self.user_biases(users).squeeze() + self.movie_biases(movies).squeeze()
# Normalize results
results = F.sigmoid(results) * (max_rating-min_rating)+min_rating
return results
In [92]:
cf = CollabFilterDataset.from_csv(path, 'ratings.csv', 'userId', 'movieId', 'rating')
weight_decay=2e-4
model = EmbeddingDotBias(cf.n_users, cf.n_items).cuda()
opt = optim.SGD(model.parameters(), 1e-1, weight_decay=weight_decay, momentum=0.9)
In [93]:
fit(model, data, 3, opt, F.mse_loss)
epoch trn_loss val_loss
0 0.832861 0.836411
1 0.805658 0.817018
2 0.789209 0.810872
Out[93]:
[0.8108725]
In [94]:
set_lrs(opt, 1e-2)
In [95]:
fit(model, data, 3, opt, F.mse_loss)
epoch trn_loss val_loss
0 0.733431 0.802443
1 0.726335 0.800945
2 0.756487 0.800443
Out[95]:
[0.800443]
## Mini neural net¶
Now, we could take our user and movie embedding values, stick them together, and feed them into a linear layer, effectively creating a neural network.
To create linear layers, we will use the PyTorch nn.Linear class. Note, this class already has biases built into it, so there is no need for separate bias vectors.
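What nn.Linear computes can be sketched in a few lines of plain Python: a matrix-vector product plus the built-in bias vector (toy weights, purely illustrative):

```python
# y = W x + b for a single input vector x; nn.Linear stores W and b for us.
def linear(x, W, b):
    return [sum(xi * wi for xi, wi in zip(x, row)) + bj
            for row, bj in zip(W, b)]

W = [[1.0, 2.0],    # 2 inputs -> 2 outputs
     [0.0, -1.0]]
b = [0.5, 1.0]      # the bias that nn.Linear builds in

print(linear([3.0, 4.0], W, b))  # [11.5, -3.0]
```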
In [106]:
class EmbeddingNet(nn.Module):
def __init__(self, number_of_users, number_of_movies, number_hidden_activations=10, p1=0.05, p2=0.5):
super().__init__()
# Set up our embedding layers
(self.user_embedding_matrix, self.movie_embedding_matrix) = [get_embedding(*o) for o in [
(number_of_users, number_of_factors), (number_of_movies, number_of_factors)
]]
# Set up the first linear layer. Since we are sticking together our users and movies, *2
self.linear_layer_1 = nn.Linear(number_of_factors*2, number_hidden_activations)
# Set up second linear layer, which will give the output
self.linear_layer_2 = nn.Linear(number_hidden_activations, 1)
self.dropout1 = nn.Dropout(p1)
self.dropout2 = nn.Dropout(p2)
def forward(self, categorical, continuous):
users, movies = categorical[:,0], categorical[:,1]
# Now, first we get the values from our embedding matrix, and concatenate the columns (dim=1)
# and then run dropout on them
x = self.dropout1(torch.cat([self.user_embedding_matrix(users),self.movie_embedding_matrix(movies)], dim=1))
# Next, feed this into our first linear layer, run it through ReLU, and perform dropout
x = self.dropout2(F.relu(self.linear_layer_1(x)))
# Lastly, we feed it into our second linear layer, run it through sigmoid and normalize
# Linear output function
return F.sigmoid(self.linear_layer_2(x)) * (max_rating-min_rating+1) + min_rating-0.5
In [107]:
weight_decay=1e-5
model = EmbeddingNet(number_of_users, number_of_movies).cuda()
### Note on fit.¶
When calling fit, we pass it a loss/cost function that it uses to measure how well the model is doing.
E.g., F.mse_loss.
In [108]:
fit(model, data, 3, opt, F.mse_loss)
epoch trn_loss val_loss
0 0.88798 0.817012
1 0.79681 0.796811
2 0.802571 0.79135
Out[108]:
[0.79135036]
In [109]:
set_lrs(opt, 1e-3)
In [110]:
fit(model, data, 3, opt, F.mse_loss)
epoch trn_loss val_loss
0 0.778022 0.789235
1 0.761803 0.789287
2 0.765764 0.794108
Out[110]:
[0.7941082]
# TIME-SPLITTING SCHEMES AND MEASURE SOURCE TERMS FOR A QUASILINEAR RELAXING SYSTEM
1 IAC - Istituto per le Applicazioni del Calcolo "Mauro Picone"
Abstract : Several singular limits are investigated in the context of a $2 \times 2$ system arising for instance in the modeling of chromatographic processes. In particular, we focus on the case where the relaxation term and an $L^2$ projection operator are concentrated on a discrete lattice by means of Dirac measures. This formulation makes it easier to study some time-splitting numerical schemes.
Keywords: chromatography, relaxation schemes, nonconservative products, conservation laws
Author: Laurent Gosse
Source: https://hal.archives-ouvertes.fr/
# If $\mu(X)<\infty$ then $f \in L^1(\mu)$ and $\int f_n \to \int f$ - An exercise question
Fix a measure space $(X,\mathcal{M},\mu)$. Suppose $\{f_n\} \subset L^1$ and $f_n \to f$ uniformly. Then I want to prove the following statement:
If $\mu(X)<\infty$ then $f \in L^1(\mu)$ and $\int f_n \to \int f$.
In some lecture notes, the attempted solution is as follows (I rewrite it in my language):
Since $f_n \to f$ uniformly then there exists $N$ such that $\left\lvert f_n(x) - f(x)\right\rvert < 1$ for all $n \geq N$ and all $x \in X$. Therefore, $$\int \left\lvert f \right\rvert \leq \int \left\lvert f - f_N \right\rvert + \int \left\lvert f_N \right\rvert \leq \int 1+ \int \left\lvert f_N \right\rvert = \mu(X) + \int \left\lvert f_N \right\rvert < \infty$$
so $f \in L^1$.
The main point that I do not understand here is: how can we say that $\mu(X) + \int \left\lvert f_N \right\rvert < \infty$? $\mu(X) < \infty$ by assumption; however, I couldn't see why $\int \left\lvert f_N \right\rvert < \infty$. Does this solution make sense? If so, can anyone explain it? Thanks!
$f_N$ is integrable, isn't it? – Davide Giraudo Jan 11 '13 at 21:22
$f_N\in L^1(\mu)$, isn't it? – Ilya Jan 11 '13 at 21:22
@DavideGiraudo: it's even funny, that our comments with 8 seconds difference are equivalent, and still different :) – Ilya Jan 11 '13 at 21:23
@John: since you are done so fast, what if $\mu(X) = \infty$? – Ilya Jan 11 '13 at 21:34
Then I guess these conditions fail. I will try to construct counterexamples. – Mark Jan 11 '13 at 21:54
$$\left|\int_X f_n\,\mathrm d\mu-\int_X f\,\mathrm d\mu\right|\leqslant\int_X|f_n-f|\,\mathrm d\mu\leqslant\mu(X)\cdot\|f-f_n\|_\infty$$
Since $f_n \to f$ uniformly, $\|f-f_n\|_\infty \to 0$, and because $\mu(X)<\infty$ the right-hand side tends to $0$; hence $\int_X f_n\,\mathrm d\mu \to \int_X f\,\mathrm d\mu$.
|
{}
|
## 9.8 Answers to the Problem Set
Equation (9.10) is $M-E_{ex}=\frac{k_b}{d_b}(T_b-T_s)$
Equation (9.11) is $M-E_{ex}-E_{sw}=\frac{k_f}{d_f}(T_s-T_r)$
$$k_b = 0.205Wm^{-1} {}^\circ C^{-1}$$ and $$k_f = 0.025Wm^{-1} {}^\circ C^{-1}$$. From this information maximum and minimum values of ($$T_b - T_s$$) and ($$T_s - T_r$$) can be calculated. These are tabulated below. The results are plotted on the two figures (Porter and Gates 1969, Figures 5 and 6, p. 231).
| | $(T_s-T_r)$ Maximum | $(T_s-T_r)$ Minimum | $(T_b-T_s)$ Maximum | $(T_b-T_s)$ Minimum |
|---|---|---|---|---|
| Shrew | 47.5 | 7.0 | 1.9 | 0.6 |
| Cow, summer | 18.8 | 7.4 | 6.8 | 6.5 |
| Cow, winter | 101.5 | 40.0 | 6.8 | 6.5 |
| Pig | 14.6 | 0 | 21.0 | 0.1 |
| Zebra finch | 26.6 | 0 | 0.9 | 0 |
| Locust | 2.9 | 0 | | |
| Cardinal | 61.7 | 0 | 1.0 | 0 |
| Jack rabbit | 40.8 | 0 | 0.7 | 0 |
| Fence lizard | 0.3 | | | |
| | Ranking, maximum $T_b-T_r$ | Total $\Delta T$ maximum | Total $\Delta T$ minimum | Ranking, minimum $T_b-T_r$ |
|---|---|---|---|---|
| Shrew | 3 | 49.4 | 7.6 | 5 |
| Cow, summer | 6 | 25.6 | 14.0 | 6 |
| Cow, winter | 1 | 108.3 | 46.5 | 7 |
| Pig | 5 | 35.6 | 1.1 | 4 |
| Zebra finch | 7 | 27.5 | 0 | |
| Cardinal | 2 | 62.7 | 0 | |
| Jack rabbit | 4 | 41.5 | 0 | |
If the environment is cold the animal will have to make this difference large to maintain $$T_b$$. It is also possible that the surface temperature will be hotter than the body temperature. Fleece on sheep can protect them from getting too hot (Hatheway 1977). Making the difference small decreases the rate of heat transfer within the body.
Many marine mammals have thick layers of fat. Fat is an economical way to store energy as well as provide insulation. It allows the animal to smooth out its form, which should reduce the friction losses due to drag when swimming. Some marine mammals are covered with fur. These animals all spend time in terrestrial habitats (seals, sea lions, otter), which seems to indicate that fur is an important adaptation on land. The fur can also provide a boundary layer of air in the water which helps to cut down on heat loss. The relative efficiency of fat to air as insulation material can be computed by comparing the ratio of the conductivities. From the text we have $\frac{k_b}{k_f}=\frac{0.205}{0.025}=8.2$ The conductivity of fat is 8.2 times greater than that of air. Therefore, to achieve the same resistance to heat flow, an animal would have to have 8.2 times the thickness of fat. The lizard can only maintain a maximum 0.5 °C difference between its skin and body temperature.
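The thickness argument can be made concrete in a couple of lines (conductivity values from the text; the 1 cm layer is a hypothetical example):

```python
# Thermal resistance of a layer is d/k, so equal resistance requires
# d_fat / d_air = k_fat / k_air.
k_fat, k_air = 0.205, 0.025   # W m^-1 C^-1 (k_b and k_f from the text)

ratio = k_fat / k_air         # 8.2
d_air = 0.01                  # hypothetical 1 cm air/fur layer
d_fat = d_air * ratio         # fat thickness giving the same resistance
print(ratio, d_fat)           # ratio = 8.2, d_fat = 0.082 m
```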
1. The first step is to calculate $$T_r$$ using Equation (9.13). This yields:
$T_r$ values (°C):

| | I | II | III | IV | V |
|---|---|---|---|---|---|
| Pig | 0.30 | 5.9 | 37.6 | 41.8 | |
| Jack rabbit | -4.0 | 18.7 | 27.5 | 32.3 | 43.7 |
| Locust | 39.1 | 20.0 | 20.0 | 1.0 | |

$\varepsilon\omega T_r^4$ values ($\varepsilon = 0.96$):

| | I | II | III | IV | V |
|---|---|---|---|---|---|
| Pig | 304.6 | 378.0 | 507.6 | 535.6 | |
| Jack rabbit | 285.9 | 394.9 | 444.7 | 473.8 | 548.6 |
| Locust | 517.8 | 402.0 | 402.0 | 307.5 | |
| | Pig | Jack rabbit | Locust |
|---|---|---|---|
| $h_c$, convection coefficient | 9.12 | 15.72 | 43.21 |
| $M_b$, mass | 120 kg | 2 kg | 0.001 kg |

$$h_c = 17.24V^{0.6}{M_b^{-0.133}}$$ with $$V =1.0ms^{-1}$$
$$h_c$$ will be the slope of the line so all we need to do is find one pair ($$Q_a$$, $$T_a$$) for each set of conditions given in Table B such that
$Q_a + M = \varepsilon\omega T_r^4+E_{sw}+E_{ex}+h_c(T_r-T_a)$ Assume $$T_a=T_r$$ in each case so that the convection is zero. Therefore $Q_a=\varepsilon\omega T_r^4+E_{sw}+E_{ex}-M$
$Q_a$ values:

| | I | II | III | IV | V |
|---|---|---|---|---|---|
| Pig | 183 | 300 | 512 | 560 | |
| Jack rabbit | 218 | 559 | 414 | 449 | 549 |
| Locust | -68 | -184 | 402 | 307.5 | |
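Under the no-convection assumption $T_a = T_r$, $Q_a$ can be computed directly (a sketch; here $\omega$ denotes the Stefan-Boltzmann constant, and the zero-evaporation, zero-metabolism case is hypothetical):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4 (omega in the text)
EPS = 0.96        # emissivity used in the text

def Q_a(T_r_celsius, E_sw, E_ex, M):
    """Q_a = eps*sigma*T_r^4 + E_sw + E_ex - M, with T_r converted to kelvin."""
    T_r = T_r_celsius + 273.15
    return EPS * SIGMA * T_r**4 + E_sw + E_ex - M

# The radiation term alone reproduces the table entry for T_r = 37.6 C:
print(round(Q_a(37.6, 0.0, 0.0, 0.0), 1))  # about 507.6 W m^-2
```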
1. The conduction term is $G=\frac{k}{x}(T_r-T_g)$ where $$k$$ is the conductivity (W m-2°C-1),
$$T_r$$ is the surface temperature(°C),
$$T_g$$ is the ground temperature(°C),
and $$x$$ is the thickness of the layer ($$m$$)
The energy balance is $Q_a + M = \varepsilon\omega T_r^4 + E_{sw} + E_{ex} + h_c(T_r - T_a) + k(T_r - T_g)$ If $$T_r > T_g$$, $$G$$ is positive and the animal will receive more energy. This will shift the climate space to the right. If $$T_g > T_r$$, the opposite shift will occur.
1. Combinations of low air temperature and high radiation do not occur naturally. The coldest air temperatures will occur at night. By similar reasoning the area labeled B in Figure 9.6 suggests that the highest air temperatures will occur under low radiation levels. Again, this will not be true. Figure 9.3 of the text shows that this is the case. Other data presented in the Morhardt and Gates paper confirm these observations. The reason is simply that the sun heats the air.
2. The two formulae needed to calculate the convection coefficients are (9.6) and (9.7). $h_{c1}=0.927V^{0.33}D^{-0.67}\;\;\;\;\;\;\;\mbox{ Porter and Gates}$ $h_{c2}=17.24V^{0.60}M_b^{-0.133}\;\;\;\;\;\;\;\mbox{ Mitchell}$
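The two formulae can be written directly as functions (a sketch; units as in the text, with V in m s^-1, D in m, and M_b in kg):

```python
def hc_porter_gates(V, D):
    # h_c1 = 0.927 V^0.33 D^-0.67
    return 0.927 * V**0.33 * D**(-0.67)

def hc_mitchell(V, M_b):
    # h_c2 = 17.24 V^0.60 M_b^-0.133
    return 17.24 * V**0.60 * M_b**(-0.133)

# At V = 1 and unit size each reduces to its leading constant:
print(hc_mitchell(1.0, 1.0))      # 17.24
print(hc_porter_gates(1.0, 1.0))  # 0.927
```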
| | $$h_{c1}$$ | $$h_{c2}$$ | $$h_{c2}/h_{c1}$$ |
|---|---|---|---|
| Sheep | 1.712 | 9.78 | 5.7 |
| Cardinal | 6.90 | 29.04 | 4.2 |
| Lizard | 16.67 | 25.70 | 1.54 |
| Shrew | 14.76 | 31.85 | 2.2 |
Mitchell's $$h_c$$:

| $$V$$ | $$V^{0.6}$$ | $$M_b$$ = 0.00001 | 0.001 | 0.1 | 1.01 | 10.0 | 160.0 |
|---|---|---|---|---|---|---|---|
| | $$M_b^{-0.1333}$$ | 4.640 | 2.511 | 1.359 | 1.0 | 0.736 | 0.398 |
| 0.1 | 0.251 | 20.08 | 10.86 | 5.88 | 4.32 | 3.18 | 1.72 |
| 0.5 | 0.660 | 52.80 | 28.57 | 15.46 | 11.38 | 8.37 | 4.53 |
| 1.0 | 1.0 | 79.99 | 43.30 | 23.43 | 17.24 | 12.69 | 6.66 |
| 3.0 | 1.933 | 154.63 | 83.68 | 46.29 | 33.32 | 24.53 | 13.26 |
| 10.0 | 3.981 | 318.45 | 172.34 | 93.27 | 68.63 | 50.51 | 27.32 |

Porter and Gates' $$h_c$$:

| $$V$$ | $$V^{0.333}$$ | $$M_b$$ = 0.00001 | 0.001 | 0.1 | 1.01 | 10.0 | 160.0 |
|---|---|---|---|---|---|---|---|
| | $$D$$ | 0.0022 | 0.01 | 0.0464 | 0.10 | 0.215 | 1.00 |
| | $$D^{-0.667}$$ | 59.95 | 21.54 | 7.743 | 4.641 | 2.783 | 1.00 |
| 0.1 | 0.464 | 25.80 | 9.27 | 3.33 | 2.00 | 1.20 | 0.43 |
| 0.5 | 0.794 | 44.14 | 15.86 | 5.76 | 3.42 | 2.05 | 0.74 |
| 1.0 | 1.0 | 55.60 | 19.97 | 7.18 | 4.30 | 2.58 | 0.92 |
| 3.0 | 1.442 | 80.17 | 28.80 | 10.35 | 6.20 | 3.72 | 1.33 |
| 10.0 | 2.154 | 119.75 | 43.03 | 15.47 | 9.27 | 5.56 | 1.99 |
Let $$D=L=(\frac{M_b}{\rho})^{1/3}$$ from Appendix II, $$\rho = 1 \times 10^3 kg\;m^{-3}$$ then $\frac{h_{c2}}{h_{c1}}=\frac{17.24V^{0.60}M_b^{-0.133}}{0.927V^{0.3}[(\frac{M_b}{\rho})^{1/3}]^{-2/3}}=4.01V^{0.27}M_b^{0.089}$
Ratio of $$h_{c2}/h_{c1}$$:

| $$V$$ (m s-1) | $$M_b$$ = 0.0001 kg | 0.001 | 0.1 | 1.0 | 10.0 | 1000 |
|---|---|---|---|---|---|---|
| 0.1 | 0.801 | 1.17 | 1.77 | 2.16 | 2.65 | 4.00 |
| 0.5 | 1.20 | 1.80 | 2.71 | 3.33 | 4.08 | 6.12 |
| 1.0 | 1.41 | 2.17 | 3.26 | 4.04 | 4.92 | 7.46 |
| 3.0 | 1.93 | 2.91 | 4.38 | 5.37 | 6.59 | 9.97 |
| 10.0 | 2.66 | 4.01 | 6.03 | 7.40 | 9.08 | 13.75 |
1. The heat flow by conduction is $$$q=-kA\frac{dT}{dx} \tag{9.15}$$$
where $$q$$ is heat flow (W)
$$k$$ is the thermal conductivity (W m °C-1)
$$A$$ is the area perpendicular to the heat flow (m2)
and $$\frac{dT}{dx}$$ is the temperature gradient (°C m-1).
For a slab under steady state conditions we can separate variables and integrate equation (9.15).
$dT=-\frac{q_s}{kA}dx$ $\int^{T_0}_{T_i}dt=-\frac{q_s}{kA}\int^x_0dx$ $T_0-T_i=-\frac{q_s}{kA}x$ $$$q_s=\frac{kA}{x}(T_i-T_0) \tag{9.16}$$$
For a cylinder we have
\begin{align} q_c &= -kA\frac{dT}{dr} \notag \\ A &= 2\pi rL \notag \\ q_c &= -k 2\pi rL \frac{dT}{dr} \notag \\ dT &= -\frac{q_c}{2\pi Lk}\frac{dr}{r} \notag \\ \int^{T_0}_{T_i}dT &= -\frac{q_c}{2\pi Lk}\int^{r_0}_{r_i}\frac{dr}{r} \notag \\ T_0 - T_i &= \frac{-q}{2\pi Lk}\ln(\frac{r_0}{r_i}) \notag \\ q_c &= \frac{2\pi Lk}{\ln(\frac{r_0}{r_1})}(T_i-T_0) \tag{9.17} \end{align}
Now if we assume the area of the slab is equal to $$L\times2\pi\frac{r_i+r_0}{2}$$ and the thickness to be $$r_0-r_i$$ we can set the two equations (9.16) and (9.17) equal.
$\frac{kL2\pi\frac{r_i+r_0}{2}(T_i-T_0)}{r_0-r_i}\stackrel{?}{=}\frac{2\pi Lk(T_i-T_0)}{\ln(\frac{r_0}{r_i})}$
Cancelling terms, we have $\ln(\frac{r_0}{r_i})\stackrel{?}{=}\frac{2(r_0-r_i)}{r_i+r_0}$ Therefore $$q_s$$ will equal $$q_c$$ if the above relationship is true. Using the data given in the problem we can compute the relative radii.
Desert iguana 0.75 0.75 0.65
Shrew 0.90 0.60 0.50
Zebra Finch 1.25 0.90 0.80
Cardinal 2.5 1.5 1.3
Sheep 25.8 12.5 11.85
Sheep 20.7 12.5 11.85
Pig 18.0 17.7 14.2
Jack Rabbit 5 3.5 3.3
Checking these values we find that for the sheep when $$r_0=25.8$$ and $$r_i=12.5$$ then $\frac{2(r_0-r_i)}{r_0+r_i}=0.668$ and $\ln(\frac{r_0}{r_i})=0.724$ This is the worst case for the data which is less than a 10% difference. You can check the error by plotting $\frac{ln(\frac{r_0}{r_i})-\frac{2(r_0-r_i)}{r_0+r_i}}{\frac{2(r_0-r_i)}{r_0+r_i}} \;\;\mbox{vs.} \;\;\frac{r_0}{r_i}$ The intuitive reason this works is that there is not much change in area for the different pairs of radii we have examined. It is, however, possible to show that
$$$\ln(x)=2[\frac{x-1}{x+1}+\frac{1}{3}(\frac{x-1}{x+1})^3+\frac{1}{5}(\frac{x-1}{x+1})^5+...] \tag{9.18}$$$
This is done by adding the series expansion for $$-\ln(1-y)$$ and $$\ln(1+y)$$ and then letting $$y=\frac{x-1}{x+1}$$. The result will give equation(9.18). Using only the first term of expansion in equation (9.18) with $$x=\frac{r_0}{r_i}$$ we get $\ln(\frac{r_0}{r_i}) = \frac{2(r_0-r_i)}{r_0+r_i}$
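The sheep check above can be recomputed in a few lines (radii from the text); the relative error of the slab approximation indeed stays under 10%:

```python
import math

r0, ri = 25.8, 12.5                   # sheep radii from the text
exact = math.log(r0 / ri)             # ln(r0/ri)
approx = 2 * (r0 - ri) / (r0 + ri)    # slab approximation
rel_err = abs(exact - approx) / approx

print(round(exact, 3))
print(rel_err < 0.10)  # True: the two expressions agree to better than 10%
```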
1. On the figures the authors give equations to relate metabolic rate and evaporative water loss. These are
\begin{aligned} &\left. \begin{array}{l} M=0.2470-0.0064T_E, \;M(cal\;cm^2\;min^{-1}) \\ M=172-4.47T_E, \;M(W\;m^{-2}) \end{array} \right\} 5 < T_E < 27 ^\circ C \\ &\;M=51\;W\;m^{-2} \;\;\;\;27 < T_E < 35 ^\circ C \\ &\;E=0.00808e^{0.03771T_E},\;E(cal\;cm^{-2}\;min^{-1}) \\ &\;E=5.64e^{0.03771T_E},\;E(W\;m^{-2}) \\ \end{aligned}
| $$T_E$$ (°C) | 5 | 10 | 15 | 20 | 25 | 30 | 35 |
|---|---|---|---|---|---|---|---|
| $$E$$ (W m-2) | 6.8 | 8.2 | 9.9 | 12.0 | 14.5 | 17.5 | 21.1 |
| $$M$$ (W m-2) | 149.7 | 127.3 | 105.0 | 82.6 | 60.3 | 51 | 51 |
Assume an animal weight of 200 g, which = 0.2 kg.
\begin{align*} h_c &= 17.24V^{0.6}M_b^{-0.1333} \\ h_c &= 5.36,\;\;21.36,\;\;56.1 \\ V &= 0.1,\;\;1.0,\;\;5.0\;\;ms^{-1} \end{align*}
The resulting climate space diagram is given below.
If the animal were simply resting outside, to reduce its metabolism to the lowest levels. it should be active from 1100 to 1500 hours. The reason to go above ground, however, is for activity. Therefore, if metabolic rate increases 1.5 times, preferred activity times should shift to 800-1000 or 1500-1600 hours. It is hard to say much about water loss when the animal is active. According to the figure, if $$M$$ increases to $$82 W m^{-2}$$ then $$E$$ drops. But if respiration rate increases with activity then $$E$$ may also increase. Morhardt and Gates considered a wide variety of above-ground habitats. A shaded environment gave much lower radiation loads during the day. If the animal orients its body parallel to the sun, this lowers $$Q_a$$ also. A great deal more could have been said about the thermoregulation strategies if the thermal environment of the burrows were monitored and if microhabitat usage and body temperature as a function of time of day had been recorded. One would predict that shaded environments including the burrow would be used more in the middle of the day. To test this, one would have to make hourly observations on microhabitat usage.
#### All Chapter 2 Marks
12th Standard EM
Reg.No. :
Maths
Time : 01:00:00 Hrs
Total Marks : 96
Answer All The Questions:
48 x 2 = 96
1. Find the inverse (if it exists) of the following:
$\left[ \begin{matrix} -2 & 4 \\ 1 & -3 \end{matrix} \right]$
2. Reduce the matrix $\left[ \begin{matrix} 3 & -1 & 2 \\ -6 & 2 & 4 \\ -3 & 1 & 2 \end{matrix} \right]$ to a row-echelon form.
3. Show that the equations 3x + y + 9z = 0, 3x + 2y + 12z = 0 and 2x + y + 7z = 0 have nontrivial solutions also.
4. Solve 6x - 7y = 16, 9x - 5y = 35 using (Cramer's rule).
5. Find $z^{-1}$, if $z=(2+3i)(1-i)$.
6. Simplify the following:
$i^{-1924}+i^{2018}$
7. If $(\cos\theta + i\sin\theta)^2 = x + iy$, then show that $x^2+y^2=1$
8. Find the argument of -2
9. Solve: (2x-1)(x+3)(x-2)(2x+3)+20=0
10. Show that the polynomial $9x^9+2x^5-x^4-7x^2+2$ has at least six imaginary roots.
11. If $\sin\alpha$, $\cos\alpha$ are the roots of the equation $ax^2 + bx + c = 0$ $(c \neq 0)$, then prove that $(a + c)^2 = b^2 + c^2$
12. Find x If $x=\sqrt { 2+\sqrt { 2+\sqrt { 2+....+upto\infty } } }$
13. Find the principal value of $\sin^{-1}(2)$, if it exists.
14. Is $\cos^{-1}(-x)=\pi-\cos^{-1}(x)$ true? Justify your answer.
15. Find the principal value of ${ tan }^{ -1 }\left( \cfrac { -1 }{ \sqrt { 3 } } \right)$
16. If ${ cot }^{ -1 }\left( \cfrac { 1 }{ 7 } \right) =\theta$ find the value of cos $\theta$
17. Find the general equation of the circle whose diameter is the line segment joining the points (−4,−2)and (1,1).
18. Find the vertices, foci for the hyperbola $9x^2-16y^2=144$.
19. Find the locus of a point which moves so that the sum of its distances from (-4, 0) and (4, 0) is 10 units.
20. Find the equation of the hyperbola whose vertices are (0, ±7) and e = $\frac { 4 }{ 3 }$
21. If $\hat { a } =-3\hat{i} -\hat { j } +5\hat { k }$, $\hat{b}=\hat{i}-2\hat{j}+\hat{k}$, $\hat{c}=4\hat{i}-4\hat{k}$, find $\hat { a } .(\hat { b } \times \hat { c } )$
22. Find the volume of the parallelepiped whose coterminus edges are given by the vectors $\hat { 2i } -\hat { 3j } +\hat { 4k }$$\hat { i } -\hat { 2j } +\hat { 4k }$ and $\hat {3 i } -\hat { j } +\hat { 2k }$
23. Find the area of the triangle whose vertices are A(3, -1, 2), B(1, -1, -3) and C(4, -3, 1)
24. Find the Cartesian equation of a line passing through the points A(2, -1, 3) and B(4, 2, 1)
25. The temperature in celsius in a long rod of length 10 m, insulated at both ends, is a function of
length x given by T = x(10 − x). Prove that the rate of change of temperature at the midpoint of the
rod is zero.
26. Prove that the function $f(x) = x^2 + 2$ is strictly increasing in the interval (2, 7) and strictly decreasing in the interval (−2, 0)
27. Find the point at which the curve $y - e^{xy} + x = 0$ has a vertical tangent.
28. Determine the domain of concavity of the curve $y = 2 - x^2$
29. Let f , g : (a,b)→R be differentiable functions. Show that d(fg) = fdg + gdf
30. If U(x, y, z) = $\frac { { x }^{ 2 }+{ y }^{ 2 } }{ xy } +3{ z }^{ 2 }y$, find $\frac { \partial U }{ \partial x } ;\frac { \partial U }{ \partial y }$ and $\frac { \partial U }{ \partial z }$
31. If $u(x, y) = x^2 + 3xy + y^2$, $x, y \in \mathbb{R}$, find the linear approximation for $u$ at (2, 1)
32. If $u=x^2+3xy^2+y^2$, then prove that $\cfrac { { \partial }^{ 2 }u }{ \partial x\partial y } =\cfrac { { \partial }^{ 2 }u }{ \partial y\partial x }$
33. Find, by integration, the volume of the solid generated by revolving about the x-axis, the region enclosed by $y = e^{-2x}$, $y = 0$, $x = 0$ and $x = 1$
34. Evaluate $\int _{ 1 }^{ 2 }{ \cfrac { { e }^{ x } }{ 1+{ e }^{ 2x } } dx }$
35. Find the area of the region bounded by the curve $y = \sin x$ and the ordinates $x=0$, $x=\cfrac { \pi }{ 3 }$
36. Find the area bounded by $y=x^2+2$, the x-axis, $x=1$ and $x=2$.
37. Find value of m so that the function y = emx is a solution of the given differential equation.
y''− 5y' + 6y = 0
38. Determine the order and degree (if exists) of the following differential equations:
$\frac { { d }^{ 2 }y }{ { dx }^{ 2 } } +3{ \left( \frac { dy }{ dx } \right) }^{ 2 }={ x }^{ 2 }log\left( \frac { { d }^{ 2 }y }{ { dx }^{ 2 } } \right)$
39. Find the order and degree of $\left( \cfrac { { d }^{ 2 }y }{ { dx }^{ 2 } } \right) ^{ 2 }+cos\left( \cfrac { dy }{ dx } \right) =0$
40. Form the D.E corresponding to y=emx by eliminating 'm'.
41. An urn contains 5 mangoes and 4 apples. Three fruits are taken at random. If the number of apples taken is a random variable, then find the values of the random variable and the number of points in its inverse images.
42. Find the probability mass function and cumulative distribution function of number of girl child in families with 4 children, assuming equal probabilities for boys and girls.
43. Prove that E(aX+b)=aE(X)+b
44. Prove that Var(ax+b)=a2Var(X)
45. Let p: Jupiter is a planet and q: India is an island be any two simple statements. Give
verbal sentence describing each of the following statements.
(i) ¬p
(ii) p ∧ ¬q
(iii) ¬p ∨ q
(iv) p➝ ¬q
(v) p↔q
46. Fill in the following table so that the binary operation ∗ on A = {a,b,c} is commutative.
| * | a | b | c |
|---|---|---|---|
| a | b | | |
| b | c | b | a |
| c | a | | c |
47. Is cross product commutative on the set of vectors? Justify your answer.
48. Form the truth table of (¬q) ∧ p.
# What is the difference between training and testing in reinforcement learning?
In reinforcement learning (RL), what is the difference between training and testing an algorithm/agent? If I understood correctly, testing is also referred to as evaluation.
As I see it, both imply the same procedure: select an action, apply to the environment, get a reward, and next state, and so on. But I've seen that, e.g., the Tensorforce RL framework allows running with or without evaluation.
• As RL is not a supervised algorithm (it's a third type of ML algorithm), you can't have the same expectations for testing and training an algorithm here as you would for supervised algorithms. – OmG May 4 at 15:54
• @OmG OK. So, as I understand from you, this concept does not apply to RL? – Cristian M May 4 at 16:03
• Not the same as the supervised learning. – OmG May 4 at 17:31
# What is reinforcement learning?
In reinforcement learning (RL), you typically imagine that there's an agent that interacts, in time steps, with an environment by taking actions. On each time step $$t$$, the agent takes the action $$a_t \in \mathcal{A}$$ in the state $$s_t \in \mathcal{S}$$, receives a reward (or reinforcement) signal $$r_t \in \mathbb{R}$$ from the environment and the agent and the environment move to another state $$s_{t+1} \in \mathcal{S}$$, where $$\mathcal{A}$$ is the action space and $$\mathcal{S}$$ is the state space of the environment, which is typically assumed to be a Markov decision process (MDP).
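This interaction can be sketched as a loop over time steps on a toy two-state MDP (everything here is illustrative and hand-rolled, not from any RL library):

```python
# Toy environment: two states {0, 1}; the action moves the state, and the
# reward signal is 1 when the next state is 1, else 0.
def env_step(state, action):
    next_state = (state + action) % 2
    reward = 1.0 if next_state == 1 else 0.0
    return next_state, reward

state, total_reward = 0, 0.0
for action in [1, 0, 1, 1]:          # a_t chosen by some fixed policy
    state, reward = env_step(state, action)
    total_reward += reward           # accumulate r_t
print(total_reward)  # 3.0
```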
# What is the goal in RL?
The goal is to find a policy that maximizes the expected return (i.e. a sum of rewards starting from the current time step). The policy that maximizes the expected return is called the optimal policy.
## Policies
A policy is a function that maps states to actions. Intuitively, the policy is the strategy that implements the behavior of the RL agent while interacting with the environment.
A policy can be deterministic or stochastic. A deterministic policy can be denoted as $$\pi : \mathcal{S} \rightarrow \mathcal{A}$$. So, a deterministic policy maps a state $$s$$ to an action $$a$$ with probability $$1$$. A stochastic policy maps states to a probability distribution over actions. A stochastic policy can thus be denoted as $$\pi(a \mid s)$$ to indicate that it is a conditional probability distribution of an action $$a$$ given that the agent is in the state $$s$$.
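The two kinds of policy can be illustrated with a toy state/action space (the names here are illustrative):

```python
import random

# Deterministic policy: each state maps to exactly one action.
det_policy = {"s0": "left", "s1": "right"}

# Stochastic policy: each state maps to a distribution over actions.
stoch_policy = {"s0": {"left": 0.9, "right": 0.1}}

def sample_action(state):
    dist = stoch_policy[state]
    actions, probs = zip(*dist.items())
    return random.choices(actions, weights=probs)[0]

print(det_policy["s0"])      # always "left"
print(sample_action("s0"))   # "left" about 90% of the time
```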
## Expected return
The expected return can be formally written as
$$\mathbb{E}\left[ G_t \right] = \mathbb{E}\left[ \sum_{i=t+1}^\infty R_i \right]$$
where $$t$$ is the current time step (so we don't care about the past), $$R_i$$ is a random variable that represents the probable reward at time step $$i$$, and $$G_t = \sum_{i=t+1}^\infty R_i$$ is the so-called return (i.e. a sum of future rewards, in this case, starting from time step $$t$$), which is also a random variable.
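For an episode that terminates after finitely many steps, the realized return is simply the tail sum of the observed rewards. A tiny sketch (the reward values are invented):

```python
rewards = [0.0, 1.0, 0.0, 2.0, 5.0]   # realized values of R_0 ... R_4

def realized_return(rewards, t):
    """G_t = R_{t+1} + R_{t+2} + ... for a finite (undiscounted) episode."""
    return sum(rewards[t + 1:])

g0 = realized_return(rewards, 0)   # 1 + 0 + 2 + 5 = 8.0
```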
## Reward function
In this context, the most important job of the human programmer is to define a function $$\mathcal{R}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$$, the reward function, which provides the reinforcement (or reward) signal to the RL agent while interacting with the environment. $$\mathcal{R}$$ will deterministically or stochastically determine the reward that the agent receives every time it takes action $$a$$ in the state $$s$$. The reward function $$\mathcal{R}$$ is also part of the environment (i.e. the MDP).
Note that $$\mathcal{R}$$, the reward function, is different from $$R_i$$, which is a random variable that represents the reward at time step $$i$$. However, clearly, the two are very related. In fact, the reward function will determine the actual realizations of the random variables $$R_i$$ and thus of the return $$G_i$$.
## How to estimate the optimal policy?
To estimate the optimal policy, you typically design optimization algorithms.
### Q-learning
The most famous RL algorithm is probably Q-learning, which is also a numerical and iterative algorithm. Q-learning implements the interaction between an RL agent and the environment (described above). More concretely, it attempts to estimate a function that is closely related to the policy and from which the policy can be derived. This function is called the value function, and, in the case of Q-learning, it's a function of the form $$Q : \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$$. The name $$Q$$-learning derives from this function, which is often denoted as $$Q$$.
Q-learning doesn't necessarily find the optimal policy, but there are cases where it is guaranteed to find the optimal policy (but I won't dive into the details).
Of course, I cannot describe all the details of Q-learning in this answer. Just keep in mind that, to estimate a policy, in RL, you will typically use a numerical and iterative optimization algorithm (e.g. Q-learning).
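To make this slightly more concrete, here is a toy sketch of tabular Q-learning on a 5-state chain. The environment, the ε-greedy exploration rate, the learning rate and the discount factor are all my own illustrative choices, not something prescribed by the text:

```python
import random
from collections import defaultdict

random.seed(0)
actions = [1, -1]                 # move right / move left on a 5-state chain
Q = defaultdict(float)            # Q[(state, action)], initialized to 0

def q_update(s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One Q-learning step: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

for episode in range(500):
    s = 0                                    # start of episode
    while s != 4:                            # state 4 is terminal
        if random.random() < 0.2:            # epsilon-greedy exploration
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a2: Q[(s, a2)])
        s_next = min(4, max(0, s + a))       # clamped chain dynamics
        r = 1.0 if s_next == 4 else 0.0      # reward only on reaching the goal
        q_update(s, a, r, s_next)
        s = s_next

# Derive the greedy policy from the learned Q-function.
policy = {s: max(actions, key=lambda a2: Q[(s, a2)]) for s in range(4)}
```

After training, the greedy policy derived from $$Q$$ moves right in every non-terminal state, which is optimal for this toy chain.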
## What is training in RL?
In RL, training (also known as learning) generally refers to the use of RL algorithms, such as Q-learning, to estimate the optimal policy (or a value function).
Of course, as in any other machine learning problem (such as supervised learning), there are many practical considerations related to the implementation of these RL algorithms, such as
• Which RL algorithm to use?
• Which programming language, library, or framework to use?
These and other details (which, of course, I cannot list exhaustively) can actually affect the policy that you obtain. However, the basic goal during the learning or training phase in RL is to find a policy (possibly, optimal, but this is almost never the case).
## What is evaluation (or testing) in RL?
During learning (or training), you may not be able to find the optimal policy, so how can you be sure that the learned policy is good enough to solve the actual real-world problem? This question needs to be answered, ideally before deploying your RL algorithm.
The evaluation phase of an RL algorithm is the assessment of the quality of the learned policy and how much reward the agent obtains if it follows that policy. So, a typical metric that can be used to assess the quality of the policy is to plot the sum of all rewards received so far (i.e. cumulative reward or return) as a function of the number of steps. One RL algorithm dominates another if its plot is consistently above the other. You should note that the evaluation phase can actually occur during the training phase too. Moreover, you could also assess the generalization of your learned policy by evaluating it (as just described) in different (but similar) environments to the training environment [1].
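As a sketch, evaluating a learned policy typically amounts to rolling it out (without exploration) for a number of episodes and averaging the returns. The one-step environment below is a stand-in, invented purely to make the code runnable:

```python
def evaluate(policy, reset, step, episodes=100):
    """Average undiscounted return of `policy` over `episodes` rollouts."""
    total = 0.0
    for _ in range(episodes):
        state, done = reset(), False
        while not done:
            state, reward, done = step(state, policy(state))
            total += reward
    return total / episodes

# Stand-in one-step environment: action 1 earns reward 1, then the
# episode ends (purely illustrative dynamics).
def reset():
    return 0

def step(state, action):
    return 1, float(action), True

avg_return = evaluate(lambda s: 1, reset, step)   # always picks action 1
```

Plotting such average (or cumulative) returns over training steps gives exactly the kind of learning curve described above.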
The section 12.6 Evaluating Reinforcement Learning Algorithms of the book Artificial Intelligence: Foundations of Computational Agents (2017) by Poole and Mackworth provides more details about the evaluation phase in reinforcement learning, so you should probably read it.
Apart from evaluating the learned policy, you can also evaluate your RL algorithm, in terms of
• resources used (such as CPU and memory), and/or
• experience/data/samples needed to converge to a certain level of performance (i.e. you can evaluate the data/sample efficiency of your RL algorithm)
• robustness/sensitivity (i.e., how the RL algorithm behaves if you change certain hyper-parameters); this is also important because RL algorithms can be very sensitive (from my experience)
## What is the difference between training and evaluation?
During training, you want to find the policy. During the evaluation, you want to assess the quality of the learned policy (or RL algorithm). You can perform the evaluation even during training.
• Here is another answer that is worth reading too. – nbro Oct 27 at 17:19
# Reinforcement Learning Workflow
The general workflow for using and applying reinforcement learning to solve a task is the following.
1. Create the Environment
2. Define the Reward
3. Create the Agent
4. Train and Validate the Agent
5. Deploy the Policy
# Training
• Training in Reinforcement learning employs a system of rewards and penalties to compel the computer to solve a problem by itself.
• Human involvement is limited to changing the environment and tweaking the system of rewards and penalties.
• As the computer maximizes the reward, it is prone to seeking unexpected ways of doing it.
• Human involvement is focused on preventing it from exploiting the system and motivating the machine to perform the task in the way expected.
• Reinforcement learning is useful when there is no “proper way” to perform a task, yet there are rules the model has to follow to perform its duties correctly.
• Example: By tweaking and seeking the optimal policy for deep reinforcement learning, we built an agent that in just 20 minutes reached a superhuman level in playing Atari games.
• Similar algorithms, in principle, can be used to build AI for an autonomous car.
# Testing
• Debugging RL algorithms is very hard. Everything runs and you are not sure where the problem is.
• To test if it worked well, if the trained agent is good at what it was trained for, you take your trained model and apply it to the situation it is trained for.
• If it’s something like chess or Go, you could benchmark it against other engines (say stockfish for chess) or human players.
• You can also define metrics for performance, ways of measuring the quality of the agent’s decisions.
• In some settings (e.g a Reinforcement Learning Pacman player), the game score literally defines the target outcome, so you can just evaluate your model’s performance based on that metric.
The goal of reinforcement learning (RL) is to use data obtained via interaction with the environment to solve the underlying Markov Decision Process (MDP). "Solving the MDP" is tantamount to finding the optimal policy (with respect to the MDP's underlying dynamics, which are usually assumed to be stationary).
Training is the process of using data in order to find the optimal policy. Testing is the process of evaluating the (final) policy obtained by training.
Note that, since we're generally testing the policy on the same MDP we used for training, the distinction between the training dataset and the testing set is no longer as important as it is the case with say supervised learning. Consequently, classical notions of overfitting and generalization should be approached from a different angle as well.
If you want, you can do training and testing in RL. Exactly the same usage, training for building up a policy, and testing for evaluation.
In supervised learning, if you use test data in training, it is like cheating. You cannot trust the evaluation. That's why we separate train and test data.
The objective of RL is a little different: RL tries to find the optimal policy. Since RL collects information by doing, there may be losses in the objective while the agent explores the environment (for more information). But that may be inevitable for a better future gain.
Multi-armed bandit example: there are 10 slot machines. Each returns a random amount of money, and they have different expected returns. I want to find the best way to maximize my gain. That's easy: I have to find the machine with the greatest expected return and use only that machine. But how do I find the best machine?
Suppose we have training and testing periods. For example, I give you an hour as the training period, during which it doesn't matter whether you lose or how much you earn. Then, in the testing period, I evaluate your performance.
What would you do? In the training period, you will try as much as possible, without considering the performance/gain. And in the testing period, you will use only the best machine you found.
This is not a typical RL situation. RL tries to find the best way by learning by doing, and all the results obtained while doing are considered.
Suppose I tried all 10 machines once each, and the No. 3 machine gave me the most money. But I am not sure that it is the best machine, because all the machines pay a RANDOM amount. If I keep using the No. 3 machine, that might be a good idea, because according to the information so far, it is the best machine. However, I might miss a better machine if I don't try the others, because of that randomness. But if I do try other machines, I might lose an opportunity to earn more money. What should I do? This is the well-known exploration-exploitation trade-off in RL.
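The 10-machine dilemma above is often handled with an ε-greedy strategy: a fraction ε of pulls explore a random machine, while the rest exploit the machine with the current best estimate. A sketch (the payout distributions are invented for illustration):

```python
import random

random.seed(1)
# True expected payouts, unknown to the agent (machine 2 is the best).
true_means = [0.1, 0.3, 0.9, 0.2, 0.5, 0.4, 0.6, 0.3, 0.2, 0.1]

counts = [0] * 10        # pulls per machine
estimates = [0.0] * 10   # running-mean payout estimate per machine
eps = 0.1                # fraction of pulls spent exploring

for pull in range(5000):
    if random.random() < eps:
        arm = random.randrange(10)                      # explore
    else:
        arm = max(range(10), key=lambda i: estimates[i])  # exploit
    payout = random.gauss(true_means[arm], 0.1)         # random payout
    counts[arm] += 1
    estimates[arm] += (payout - estimates[arm]) / counts[arm]  # running mean

best = max(range(10), key=lambda i: estimates[i])
```

Exploration keeps every machine's estimate honest, while exploitation concentrates pulls (and therefore earnings) on the machine that currently looks best.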
RL tries to maximize the gain, including both the gains right now and the gains in the future. In other words, the performance during training is also considered part of its performance. That's why RL is neither unsupervised nor supervised learning.
However, in some situations, you might want to separate training and testing. RL is designed for an agent that interacts with the environment. However, in some cases, rather than having an interactive playground, you only have data of past interactions. The formulation would be a little different in this case.
---
# Coefficient of Variation Calculator
## How To Use Coefficient of Variation Calculator
Let's first understand what the coefficient of variation (CV) is and how we can calculate it.
In probability theory and statistics, the coefficient of variation (CV) is a standardized measure of dispersion of a probability distribution or frequency distribution.
It is often expressed as a percentage, and is defined as the ratio of the standard deviation $$\sigma$$ to the mean $$\mu$$.
Coefficient of Variation Calculation:
Let the set of $$n$$ terms be $$x_1, x_2, x_3, x_4, x_5, \ldots, x_n$$.
$$Mean(\mu) = \frac{1}{n}\sum_{i=1}^n x_i$$ $$\mu= \frac{1}{n}(x_1 + x_2 + x_3 + ... + x_n)$$ $$\sigma = \sqrt{\frac{1}{n}\sum_{i=1}^n (x_i\hspace{0.1cm}-\hspace{0.1cm}\mu)^2}$$ $$Coefficient\hspace{0.1cm}of\hspace{0.1cm}Variation = \frac{\sigma}{\mu}$$
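The formulas translate directly into code. A small sketch using the population standard deviation (dividing by $$n$$, matching the formula above):

```python
from math import sqrt

def coefficient_of_variation(xs):
    """CV = sigma / mu, with the population standard deviation."""
    n = len(xs)
    mean = sum(xs) / n
    sigma = sqrt(sum((x - mean) ** 2 for x in xs) / n)  # divide by n
    return sigma / mean

cv = coefficient_of_variation([2, 4, 4, 4, 5, 5, 7, 9])  # mean 5, sigma 2
```

For this data set the mean is 5 and the standard deviation is 2, so the CV is 0.4 (or 40% when expressed as a percentage).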
---
# Tagged with #fisherstrand 0 documentation articles | 0 announcements | 7 forum discussions
No articles to display.
Created 2015-11-25 09:07:33 | Updated | Tags: combinevariants fisherstrand qualbydepth variantannotator
I am doing variant calling of multiple RNAseq datasets using GATK/3.4.46. For limitation of computational resources, I ran HaplotypeCaller on each dataset separately. Then I ran CombineVaraints to merge all output VCF files using this command
java -Xmx10g -jar GenomeAnalysisTK.jar \
    -T CombineVariants \
    -R $gatk_ref \
    --variant set1.vcf \
    --variant set2.vcf \
    --variant set3.vcf \
    -o combine_output.vcf \
    -genotypeMergeOptions UNIQUIFY

Then I tried to run VariantFiltration using this command

java -Xmx2g -jar $GATK/GenomeAnalysisTK.jar \
    -T VariantFiltration \
    -R $gatk_ref \
    -V combine_output.vcf \
    -window 35 -cluster 3 \
    -filterName FS -filter "FS > 30.0" \
    -filterName QD -filter "QD < 2.0" \
    -o $output
Several thousand variants threw warnings for the absence of FS and QD. Following @Sheila's advice in http://gatkforums.broadinstitute.org/discussion/2334/undefined-variable-variantfiltration, I ran VariantAnnotator to add these annotations using this command
java -Xmx45g -jar GenomeAnalysisTK.jar \
    -R $gatk_ref \
    -T VariantAnnotator \
    -I input1.bam \
    -I input2.bam \
    ...
    -I input57.bam \
    -V combine_output.vcf \
    -A Coverage \
    -A FisherStrand \
    -A QualByDepth \
    -nt 7 \
    -o combine_output_ann.vcf
Then I repeated the VariantFiltration step, but I have 2 problems: 1) About 2000 variants are still not annotated for FS. All of them are indels and many of them are not homozygous for the ALT allele. Also, ~40 variants are still not annotated for QD; all of them have multiple ALT alleles. 2) The combined VCF record has the QUAL of the first VCF record with a non-MISSING QUAL value. According to my manual calculations, I think VariantAnnotator calculates the QD value by dividing this QUAL value by the AD of samples with a non hom-ref genotype call. This causes many variants to fail the QD filter.
Thank you
Created 2015-07-23 11:26:42 | Updated | Tags: fisherstrand haplotypecaller downsampling strand-bias
Hi GATK team, Again thanks a lot for the wonderful tools you're offering to the community.
I have recently switched from UnifiedGenotyper to Haplotype Caller (1 sample at a time, DNASeq). I was planning to use the same hard filtering procedure that I was using previously, including the filter of the variants with FS > 60. However I am facing an issue probably due to the downsampling done by HC.
I should have 5000 reads, but DP is around 500/600, which I understand is due to downsampling (even with -dt NONE). I understand that it does not impact the calling itself. However, it is a problem for me for 2 reasons: 1) calculating the frequency of the variant using the AD field is not correct (not based on all reads); 2) I get variants with FS > 60 whereas, when you look at the entire set of reads, there is absolutely no strand bias.
Example with this variant chr17 41245466 rs1799949 G A 7441.77 STRAND_BIAS; AC=1;AF=0.500;AN=2;BaseQRankSum=7.576;DB;DP=1042;FS=63.090;MLEAC=1;MLEAF=0.500;MQ=60.00;MQRankSum=0.666;QD=7.14;ReadPosRankSum=-11.896;SOR=5.810 GT:AD:GQ:PL:SB 0/1:575,258:99:7470,0,21182:424,151,254,4
When I observe all reads I have the following counts, well shared on the + and - strands Allele G : 1389 (874+, 515-) Allele A : 1445 (886+, 559-)
Could you please tell me how to avoid such an issue ? (By the way, this variant is a true one and should not be filtered out).
Thanks a lot.
Created 2015-01-26 11:01:54 | Updated 2015-01-26 11:47:17 | Tags: fisherstrand haplotypecaller jexl strand-bias filtering hardfilters
Hi, I need to apply hard filters to my data. In cases where I have lower coverage I plan to use the Fisher Strand annotation, and in higher coverage variant calls, SOR (using a JEXL expression to switch between them: DP < 20 ? FS > 50.0 : SOR > 3).
The variant call below (some annotations snipped), which is from a genotyped gVCF from HaplotypeCaller (using a BQSR'ed BAM file), looks well supported (high QD, high MQ, zero MQ0). However, there appears to be some strand bias (SOR=3.3):
788.77 . DP=34;FS=5.213;MQ=35.37;MQ0=0;QD=25.44;SOR=3.334 GT:AD:DP:GQ:PL 1/1:2,29:31:35:817,35,0
In this instance the filter example above would be applied.
## My Question
Is this filtering out a true positive? And what kind of cut-offs should I be using for FS and SOR?
The snipped annotations ReadPosRankSum=-1.809 and BaseQRankSum=-0.8440 for this variant also indicate that the evidence supporting this variant call has some bias (the variant appears near the end of reads, in low-quality bases, compared to the reads supporting the reference allele).
## My goal
This is part of a larger hard filter I'm applying to a set of genotyped gVCFs called from HaplotypeCaller.
I'm filtering HomRef positions using this JEXL filter:
vc.getGenotype("%sample%").isHomRef() ? ( vc.getGenotype("%sample%").getAD().size == 1 ? (DP < 10) : ( ((DP - MQ0) < 10) || ((MQ0 / (1.0 * DP)) >= 0.1) || MQRankSum > 3.2905 || ReadPosRankSum > 3.2905 || BaseQRankSum > 3.2905 ) ) : false
And filtering HomVar positions using this JEXL:
vc.getGenotype("%sample%").isHomVar() ? ( vc.getGenotype("%sample%").getAD().0 == 0 ? ( ((DP - MQ0) < 10) || ((MQ0 / (1.0 * DP)) >= 0.1) || QD < 5.0 || MQ < 30.0 ) : ( BaseQRankSum < -3.2905 || MQRankSum < -3.2905 || ReadPosRankSum < -3.2905 || (MQ0 / (1.0 * DP)) >= 0.1) || QD < 5.0 || (DP < 20 ? FS > 60.0 : SOR > 3.5) || MQ < 30.0 || QUAL < 100.0 ) ) : false
My goal is true positive variants only and I have high coverage data, so the filtering should be relatively stringent. Unfortunately I don't have a database I could use to apply VQSR, henceforth the comprehensive filtering strategy.
Created 2013-09-20 13:36:22 | Updated | Tags: basequalityranksumtest fisherstrand vcf mqranksum readposranksum
Hey guys,
I'm struggling with some statistics given in the VCF file: the rank-sum tests. I started googling around, but that turned out not to be helpful for understanding them (in my case). I really have no idea how to interpret the VCF statistic values coming from a rank-sum test. I have no clue whether a negative, positive, or near-zero value is good or bad. Therefore I'm asking for some help here. Maybe someone knows a good tutorial page or can give me a hint to better understand the values of MQRankSum, ReadPosRankSum and BaseQRankSum. I have the same problem with the FisherStrand statistics. Many, many thanks in advance.
Created 2013-09-05 21:36:20 | Updated 2013-09-05 21:37:20 | Tags: unifiedgenotyper fisherstrand
Hello,
I have the following variant called by Unified Genotyper (GATK version : GenomeAnalysisTK-2.6-5) :
The FS score is 37.414. But a closer look at the BAM file indicates that the 115 reads supporting the alternate allele G are all on the + strand. Shouldn't the FS score be much higher for this variant? Of the reads supporting the reference allele T at this position, 113 are on the + strand and 67 are on the - strand.
Created 2013-01-04 16:58:20 | Updated 2013-01-07 19:13:17 | Tags: fisherstrand indels
I am filtering looking for rare variants and found some frameshift variants in an interesting gene. Some of them are noted as PASS in the QC column of the VCF and some are noted as Indel_FS . What exactly does that second notation mean? I am almost positive that these will validate given how they segregate in my subjects.
Created 2012-11-13 16:05:23 | Updated | Tags: fisherstrand strand-bias exome
Hi,
I have seen the definition of strand bias on this site (below) but I need a little clarification. Does the FS filter (a) highlight instances where reads are only present on a single strand and contain a variant (as may occur toward the end of exome capture regions) or does it (b) specifically look for instances where there are reads on both strands but the variant allele is disproportionately represented on one strand (as might be indicative of a false positive), or does it (c) do both?
I had thought it did (b) but have encountered some disagreement.
** How much evidence is there for Strand Bias (the variation being seen on only the forward or only the reverse strand) in the reads? Higher SB values denote more bias (and therefore are more likely to indicate false positive calls).
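For context across these threads: the FS annotation is the Phred-scaled p-value of a two-sided Fisher's exact test on the 2×2 table of reference/alternate versus forward/reverse read counts. A self-contained sketch (the function name is mine; the example counts are those from the 2013-09-05 question above):

```python
from math import comb, log10

def fisher_strand(ref_fwd, ref_rev, alt_fwd, alt_rev):
    """Phred-scaled p-value (-10*log10 p) of a two-sided Fisher's exact
    test on the ref/alt x forward/reverse 2x2 read-count table."""
    row_ref = ref_fwd + ref_rev
    row_alt = alt_fwd + alt_rev
    col_fwd = ref_fwd + alt_fwd
    n = row_ref + row_alt

    def prob(a):
        # Hypergeometric probability of a table whose ref-forward cell is a.
        return comb(row_ref, a) * comb(row_alt, col_fwd - a) / comb(n, col_fwd)

    p_obs = prob(ref_fwd)
    lo = max(0, col_fwd - row_alt)
    hi = min(row_ref, col_fwd)
    # Two-sided p-value: sum over all tables at least as extreme as observed.
    p = sum(prob(a) for a in range(lo, hi + 1) if prob(a) <= p_obs + 1e-12)
    return -10 * log10(min(max(p, 1e-300), 1.0))

fs = fisher_strand(113, 67, 115, 0)   # ref 113+/67-, alt 115+/0-
```

A perfectly balanced table yields an FS near 0, while the lopsided counts above yield an FS far larger than the reported 37.414, consistent with the questioner's intuition. (GATK's own implementation differs in details, e.g. how it handles deep sites.)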
---
Construction of BrowseFragment – Android TV application hands on tutorial 2
[Update 2015.11.17: revise]
Construction of BrowseFragment
In this chapter, we will implement a combination of header and selectable objects (so called “cards”). But before going to real implementation, it’s good to understand the construction of BrowseFragment. You can also read the source code in the sdk (android/support/v17/leanback/app/) by yourself.
Let’s start explaination by using Android TV sample application. When you launch application, the contents are aligned in a grid structure. Each header title on the left have a each contents row, and this header – contents row relationship is one to one. This “header + contents row” combination is represented by ListRow. Body of BrowseFragment is a set of ListRow (I will use the term RowsAdapter in this post).
In the picture below, a ListRow is represented by a blue circle, and the blue square is the RowsAdapter, which is a set of blue circles.
A set of ListRows constructs the RowsAdapter, which makes up the main body UI of BrowseFragment.
Next, let’s look inside the ListRow in more detail. The contents of the header is specified by ArrayObjectAdapter (I call RowAdapter in this post), which is a set of Object (I call CardInfo or Item in this post).
This CardInfo can be any object, and as it will be explained in detail later, how to show this CardInfo can be specified by Presenter class.
Construction of each ListRow.
To summarize,
ArrayObjectAdapter (RowsAdapter) ← A set of ListRow
ListRow = HeaderItem + ArrayObjectAdapter (RowAdapter)
ArrayObjectAdapter (RowAdapter) ← A set of Object (CardInfo/Item)
Presenter class
The design of a card is determined by a Presenter. A Presenter defines how to show/present the cardInfo. The Presenter class itself is an abstract class, so you need to extend it to suit your app's UI design.
When you extend Presenter, you need to override at least below 3 methods.
• onCreateViewHolder(Viewgroup parent)
• onBindViewHolder(ViewHolder viewHolder, Object cardInfo/item)
• onUnbindViewHolder(ViewHolder viewHolder)
For the details of these methods, I encourage you to refer to the source code of the Presenter class. Presenter has an inner class, ViewHolder, which holds the reference to the View. You may access the View via the viewHolder in the event listener callback methods (onBind, onUnbind, etc.).
Let’s proceed to hands on. Here, we will implement GridItemPresenter class.
In this sample application, Object (CardInfo/item) is String type and viewHolder holds TextView reference to show this String
The layout of view is defined in the onCreateViewHolder().
Argument of onBindViewHolder(), we can access viewHolder created by onCreateViewHolder and also Object (CardInfo/item), which stores card information (In this example, just a String).
After defining your own Presenter, you only need to set the RowsAdapter at the start of the Activity. You can do so in onActivityCreated() in MainFragment.
The whole source code of MainFragment will then be as follows,
so that MainFragment can refer to the background color setting.
(III) Build and Run!
Now you can see header & contents combination is implemented.
Note that we only defined the Presenter and loaded the items to show. Other behavior, e.g. the animation that enlarges an item when you select it, is already implemented in the SDK. So even for a non-designer, it's easy to achieve a certain level of UI quality in an Android TV application.
Source code is uploaded on github.
See next post,How to use Presenter and ViewHolder? – Android TV application hands on tutorial 3, for CardPresenter implementation which uses ImageCardView to present a card with main image, title, and sub-text.
---
missashleyn: Can someone help me please? A wooden pyramid, 12 inches tall, has a square base. A carpenter increases the dimensions of the wooden pyramid by a factor of 5 and makes a larger pyramid with the new dimensions. Describe in complete sentences the ratio of the volumes of the two pyramids.
1. Dido525
Do you know what the volume of a pyramid is?
2. missashleyn
1/3 (b)(h)?
3. Dido525
Yep. So in the first case, what would the volume be? To find the ratio, you divide the new value by the old value, right?
4. Dido525
To find the ratio*
5. missashleyn
Well I know the height, but what's the base?
6. Dido525
No need. You got $v=\frac{ 1 }{ 3 }A*h$
7. Dido525
h=12.
8. Dido525
Therefore: $v=\frac{ 1 }{3 } * A *12$ = $V=4A$
9. Dido525
Now we are given that the area of the base was increased by a factor of 5. In other words, it was multiplied by 5.
10. missashleyn
So, 20?
11. Dido525
Ratios mean you divide, not multiply.
12. missashleyn
Now I'm confused :(
13. Dido525
14. missashleyn
I thought you said it was multiplied by 5?
15. Dido525
Haha, yeah for the new volume. But when you find a ratio you divide the new value by the old value.
16. Dido525
ratio*
17. missashleyn
So... what is the ratio of the volumes of the 2 pyramids? o.o
18. Dido525
Lol, I don't want to give you an answer :P . I would rather you understand it.
19. Dido525
Allright better explanation.
20. Dido525
It's a 1/5 ratio right?
21. Dido525
Because we scale the pyramid by a factor of 5.
22. Dido525
1:5 ratio sorry.
23. Dido525
Ignore all the other parts.
24. missashleyn
I'm seriously so confused it's not even funny! Lol okay so the dimensions are increased by a factor of 5. So you're saying the ratio of the 2 pyramids is 1:5?
25. Dido525
Ahh but remember, since this is volume, we cube that 5.
26. Dido525
1/5^3
27. Dido525
or 1:125 which can also be 1/125.
28. missashleyn
My head is starting to hurt..so I cube the ratio 1:5? WHAT? Lol you have to break it down for me all at once. I don't do step by step :b
29. Dido525
Allright.
30. Dido525
The scale factor of the pyramid is 1:5, right? This can be re-written as 1/5. So the ratio would be 1:5, but since we are dealing with volumes, we have to cube that ratio: in other words, 1/5^3 or 1/125. We can say the new pyramid is 125 times greater than the old pyramid.
31. missashleyn
Is that the answer or are you waiting for me to answer? Lol I don't know how to do this.
32. Dido525
What don't you understand?
33. Dido525
I want to try and help :c .
34. missashleyn
I'm confused on the entire thing lol, I mean I'd like to understand how to get the right answer so I can fully explain my answer in complete sentences but at this point my head is starting to hurt and I just want to finish it. Lol
35. dumbcow
initial dimensions: height = 12; let the square base have sides of 1, so the initial volume = (1/3)(1)(12) = 4. new dimensions: multiply lengths by 5, so height = 60 and the square base has sides of 5, giving an area of 25. new volume = (1/3)(25)(60) = 500. volume ratio = 500/4 = 125. does that help?
36. dumbcow
in general, the rule is that the volume ratio = (length ratio)^3 --> 5^3 = 125
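dumbcow's rule is easy to verify numerically; a quick sketch (base side 1 is chosen for convenience, as in the worked example):

```python
def pyramid_volume(base_side, height):
    """Volume of a square-based pyramid: (1/3) * base area * height."""
    return (base_side ** 2) * height / 3

small = pyramid_volume(1, 12)        # (1/3)(1)(12) = 4
large = pyramid_volume(5, 60)        # every length scaled by 5
ratio = large / small                # 125, i.e. the length ratio cubed
```

Whatever base side you pick, the ratio comes out to 5³ = 125, so the answer does not depend on the unknown base dimension.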
37. missashleyn
Yes! Thank you both for helping me.. sorry I get confused so easily! I appreciate you both! :)
38. dumbcow
:)
---
# Upper incomplete gamma integral
I would like to know whether the following relation is correct or not?
$\frac{d}{dz}\Gamma(w,\mu z)= -\mu^wz^{w-1}e^{-\mu z}$,
where $\Gamma(w,\mu z)$ is the upper incomplete gamma integral.
Can anyone provide me a reference for the above relation if it is correct?
I assume $w, \mu$ do not depend on $z$. From the definition as an integral or as an explicit reference e.g. http://functions.wolfram.com/GammaBetaErf/Gamma2/20/01/02/0001/ we have $$\frac{\partial}{\partial z} \Gamma(a,z) = -e^{-z}z^{a-1}$$ and therefore with the chain-rule $$\frac{\partial}{\partial z}\Gamma(w,\mu z)= -e^{-\mu z}(\mu z)^{w-1}\mu = -\frac{e^{-\mu z}}{z}(\mu z)^w$$ The difference between this and your expression is $$-\frac{e^{-\mu z}}{z}((\mu z)^w - \mu^w z^w)$$ which vanishes if $(\mu z)^w= \mu^w z^w.\;$ Thus the validity of the relation depends on the domains of $\mu, z, w;\,$ it is correct e.g. for real positive values.
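For real positive values the relation can be sanity-checked numerically, comparing a central finite difference of the defining integral against the closed form. The cutoff, step count, and the sample values of $w, \mu, z$ below are ad-hoc choices for the check:

```python
import math

def upper_gamma(a, x, cutoff=40.0, steps=200000):
    """Gamma(a, x) = integral from x to infinity of t^(a-1) e^(-t) dt,
    approximated with the trapezoid rule on [x, cutoff]."""
    h = (cutoff - x) / steps
    f = lambda t: t ** (a - 1) * math.exp(-t)
    total = 0.5 * (f(x) + f(cutoff))
    total += sum(f(x + i * h) for i in range(1, steps))
    return total * h

w, mu, z = 2.5, 1.3, 0.7
h = 1e-3
numeric = (upper_gamma(w, mu * (z + h)) - upper_gamma(w, mu * (z - h))) / (2 * h)
closed_form = -math.exp(-mu * z) * (mu * z) ** (w - 1) * mu
```

The two values agree to several decimal places, as expected for positive real arguments where $(\mu z)^w = \mu^w z^w$ holds.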
---
SEARCH HOME
Math Central Quandaries & Queries
brian, a student: Three circles with radii 3,4 and 5 touch each other. The circles are tangent to each other. What is the area of the triangle formed by the centers of the circles?
Hi Brian.
One side must be 3+4 units long, another side must be 4+5 units long and the last side must be 3+5 units long.
Thus you know the lengths of all the sides of the triangle, so you can use Heron's Formula to calculate its area.
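With sides 3+4 = 7, 4+5 = 9 and 3+5 = 8, Heron's formula gives the area directly; a quick sketch:

```python
from math import sqrt

def heron_area(a, b, c):
    """Heron's formula: area = sqrt(s(s-a)(s-b)(s-c)), s = semi-perimeter."""
    s = (a + b + c) / 2
    return sqrt(s * (s - a) * (s - b) * (s - c))

area = heron_area(3 + 4, 4 + 5, 3 + 5)   # sides 7, 9, 8
```

Here s = 12 and the area is √(12·5·3·4) = √720 ≈ 26.83 square units.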
Cheers,
Stephen La Rocque
Math Central is supported by the University of Regina and The Pacific Institute for the Mathematical Sciences.
---
Torsion groups
Definition
See at torsion subgroup.
Torsion groups and group homomorphisms form a category $Tor$; abelian torsion groups and group homomorphisms form a category $Ab Tor$.
References
Revised on August 26, 2014 23:15:16 by Urs Schreiber (82.136.246.44)
---
Lessons in OOP derived from a program written to produce the classic song.
## Rediscovering Simplicity
### Simplifying code
4 examples of different ways to solve a problem
1. Incomprehensibly concise
Brevity is prioritized. Difficult to understand, even to original programmer
2. Speculatively general
Optimizes too soon. Spends time abstracting in ways that might not be necessary. Wastes time, solves wrong problems.
3. Concretely abstract
Prioritizes abstraction without consideration for the problem domain. Example: Creating a beer method that returns the string ‘beer’. Problematic, because the concept that’s relevant isn’t ‘beer’. It’s a beverage.
4. Shameless green
Straightforward. Passes tests. Not optimized. Redundant. If nothing changes, it will be good enough. Prioritizes understandability over changeability.
### Judging Code
How do you know which is best? There are many opinions about what good code looks like, but they aren’t very useful. There are metrics that can be used to compare different approaches.
1. SLOC or LOC (Source lines of code)
Very generally, less is better. However, it’s such an ambiguous number that it borders on uselessness. A bad programmer will probably solve a problem with more lines than a good one. But overly concise code can be problematic, as well. Still, it’s a point of reference.
2. Cyclomatic Complexity
A metric created by Thomas J. McCabe, Sr. to identify code that is difficult to test or maintain. The CC algo counts unique execution paths in a program. Related to number of conditionals. Makes no claims beyond its objective count of execution paths.
3. Assignments, branches and conditions (ABC)
“Cognitive size” of code. The more complex code is (in more dimensions than execution paths), the more difficult it is to reason about. Flog is a popular Ruby library to measure ABC.
With these metrics, shameless green emerges as best choice
### Summary
• Use TDD to find shameless green.
• Don’t waste time solving problems that are not confirmed to be the right problem
• Start with ‘good enough’ solutions, and they’ll remain so until they need to be changed.
## Test Driving Shameless Green
• Red/green/refactor is the process of writing a failing test, then getting it to pass, then refactoring when necessary
• Writing first tests are the hardest, and they decrease with difficulty the more (good) tests are written. That’s because you learn something about the problem domain with each test, so every new test means you know more about the problem.
• Tests should be small
• Tests should drive out very incremental changes
• Resist the temptation to code or test ahead. Respect the process.
• This often means writing “dumb” solutions.
• Code forward. Make progress on the current problem and don’t get sidetracked by seeming low hanging fruit or good ideas. Respect the process.
### Removing duplication
• Isolate what changes and what doesn’t
• Dumb solution
def verse(number)
  if number == 99
    "99 bottles of beer on the wall, " +
    "99 bottles of beer.\n" +
    "Take one down and pass it around, " +
    "98 bottles of beer on the wall.\n"
  else
    "3 bottles of beer on the wall, " +
    "3 bottles of beer.\n" +
    "Take one down and pass it around, " +
    "2 bottles of beer on the wall.\n"
  end
end
• Separating what changes and what doesn’t
def verse(number)
  if number == 99
    n = 99
  else
    n = 3
  end
  "#{n} bottles of beer on the wall, " +
  "#{n} bottles of beer.\n" +
  "Take one down and pass it around, " +
  "#{n - 1} bottles of beer on the wall.\n"
end
• Using the knowledge from above to create a smart solution
def verse(number)
"#{number} bottles of beer on the wall, " +
"#{number} bottles of beer.\n" +
"Take one down and pass it around, " +
"#{number-1} bottles of beer on the wall.\n"
end
### Transformations
The Transformation Priority Premise post linked under Further Reading lists different types of transformations and ranks them in order of simplicity. When there are multiple ways to accomplish something, choose the simplest transformation.
### Hewing to the Plan
• Use key words and abstractions to reveal intention.
• Example: You can use if statements to accomplish what case statements can do, but they tell different stories. if chains compare distinct data. Case will make decisions based on the outcome of a single comparison.
• Tests can reveal responsibilities in code.
• Tests should be dumb. Tests are not the place for abstractions or DRY. Example: Hardcode the expected result for all 99 verses for the song methods test.
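For instance, a dumb test for the generalized verse method hardcodes its expectation in full (a sketch using plain assertions rather than a test framework):

```ruby
def verse(number)
  "#{number} bottles of beer on the wall, " +
  "#{number} bottles of beer.\n" +
  "Take one down and pass it around, " +
  "#{number - 1} bottles of beer on the wall.\n"
end

# Dumb on purpose: the expected string is written out literally,
# so the test shares no logic (and no bugs) with the code under test.
expected =
  "99 bottles of beer on the wall, " \
  "99 bottles of beer.\n" \
  "Take one down and pass it around, " \
  "98 bottles of beer on the wall.\n"

raise "verse(99) mismatch" unless verse(99) == expected
```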
## Unearthing Concepts
Shameless Green prioritizes understandability over changeability. So what happens when you need to change the code?
Imagine a new requirement rolls in to replace every appearance of ‘6 bottles’ with ‘a 6 pack’.
• Still don’t over optimize. Only solve for the new requirement.
• Extending the case statement introduces untenable duplication.
• So how do we know what to change and how to change it?
### Open/Closed Principle
• One of the pillars of SOLID.
• Says an object should be open for extension, but closed for modification. In other words, keep the processes of refactoring and adding features separate – Don’t refactor and add a feature at the same time.
• Before adding a feature, the code must be ‘open’ to the change. If it’s not open, make it open. If you don’t know how to make it open, fix the easiest code smell. Repeat until you can open it.
### Code smells
• Anti patterns with prescribable refactors
• Some are obvious (Duplication)
• Others less so (Shotgun Surgery)
• Read Martin Fowler’s refactoring book.
• If you don’t know code smells, list what you don’t like about the code. It probably corresponds to a smell.
• You don’t need to fix all of them. Choose the easiest to fix, fix it, then see if you can open the code to the new feature.
• Current smells in our code are the switch statement and duplication. Duplication is easier, so let’s fix that first by refactoring.
### Refactoring Systematically
• Refactoring improves code’s internal structure without changing its behavior.
• Refactoring is how you open existing code to new features.
• Tests are the guardrails that allow safe, quick and effective refactoring.
• If a refactor breaks a test, then either
• You accidentally changed the code’s behavior, and you need to try again
• Or, your tests are too tightly coupled to your code
• Tests measure code’s output, not its internals. Therefore, ‘Never change tests during refactoring’
### Flocking Rules
• The duplication in the shameless green is a result of an unidentified abstraction.
• The question is, how are these different verses actually the same?
• Flocking rules are steps that help identify unclear abstractions
• Name comes from behavior of animal flocks. Small, individual decisions reveal larger abstractions (Movement of a flock of birds looks cohesive but is product of repeated application of small decisions from individual members.)
1. Select the things that are most alike.
2. Find the smallest difference between them.
3. Make the simplest change that will remove the difference.
• Each change follows four substeps:
1. Parse new code
2. Parse and execute it
3. Parse, execute and use it
4. Delete unused code
• Make small changes so errors are helpful.
• If you encounter an error you don’t understand, back up and make smaller changes.
### Converging on Abstractions
• DRYing out sameness has some value.
• DRYing out differences has more value.
• If two examples of an abstraction have a difference, that difference is a smaller abstraction.
• Programmers have a bad habit of merely believing they understand the abstraction then inventing a solution.
• A better habit is to use small, iterative refactors to find a solution. Like the flocking rules.
It is common to find that hard problems are hard only because the easy ones haven’t been solved yet. Therefore, don’t discount the value of solving easy problems.
• The most alike parts of the code are verse 2 and all the others. The only difference is ‘bottle’ vs ‘bottles’.
• That difference is the concept that needs an abstraction.
• General rule: The name of the concept is often one level of abstraction higher than the thing itself.
• Since the diff is bottle/bottles (And we know we have a new requirement to replace them with ‘six-pack’ when there are 6 bottles) ‘container’ is a reasonable abstraction.
• Tool to find name for an abstraction. Lay out knowns in a table. The name for the unknown column might be a usable abstraction.
| Number | ??? |
| --- | --- |
| 1 | bottle |
| 6 | six-pack |
| n | bottles |
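The unknown column translates directly into a method (a sketch; at this stage only the bottle/bottles rows exist, and the six-pack case arrives with the new requirement later):

```ruby
# "container" names the unknown column: one abstraction level
# above the concrete values "bottle" and "bottles".
def container(number)
  if number == 1
    "bottle"
  else
    "bottles"
  end
end
```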
### Making Methodical Transformations
Making a slew of simultaneous changes is not refactoring - it’s rehacktoring.
• Running with this newfound abstraction, writing the method, then implementing it by changing existing code introduces too many changes at once.
• Refactor by following the flocking rules and the substeps
• Generally
• Change one line at a time
• Run tests after every change
• If a test fails, find a better change (Don’t change tests during refactoring)
• To keep changes small and code deployable, employ ‘Gradual Cutover Refactoring’
• Allow changes to be gradually adopted. Example: Use default argument values to avoid having to update sender and receiver.
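For example (the class name and signature here are illustrative, not the book's): adding a keyword argument with a default preserves the old behavior, so existing senders keep working while call sites migrate one at a time.

```ruby
class Bottles
  # Gradual cutover: the new :container parameter defaults to the
  # old behavior, so the receiver changes without breaking senders.
  def verse(number, container: "bottles")
    "#{number} #{container} of beer on the wall.\n"
  end
end

song = Bottles.new
song.verse(99)                        # old call site, untouched
song.verse(6, container: "six-pack")  # migrated call site
```

Once every sender passes the argument explicitly, the default can be deleted in a final, separate step.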
There are plenty of hard problems in programming, but [refactoring] isn’t one of them.
### Summary
• Separate refactoring and adding new features
• First, open code to extension
• Then, extend it
• If it is not obvious how to open code, start identifying and resolving code smells until a path opens.
• Opening code often requires identifying abstractions. Use flocking rules to do so.
## Ch. 4 Practicing Horizontal Refactoring
• Follow flocking rules!
1. Find most similar cases
2. Find the smallest difference between them
3. Make the smallest change to remove the difference
• ‘Sameness’ increases with abstraction
### Finding good names
• Good names reflect concepts. Some concepts are simple, like ‘containers’. Some are not.
• When concept is not easy to name, consider one of these strategies:
1. Spend 10 minutes with a thesaurus to find a good name. Use the best option found on the grounds that it’s good enough, and you can change it in the future.
2. Use a meaningless placeholder name now, like ‘foo’, and assume (hope) that a more appropriate name will reveal itself as code evolves.
3. Ask someone else for help, maybe domain experts or people who are good at naming things.
Code is read many more times than it is written, so anything that increases understandability lowers costs.
### Liskov Substitution Principle
Subclasses should be substitutable for their superclasses
(It’s the ‘L’ in SOLID.)
• Every piece of knowledge is a dependency.
• Dependencies make code harder to maintain.
• We should strive to reduce dependencies.
• We can reduce dependencies by writing code that requires little knowledge.
Don’t refactor under red. Return to green and make incremental changes until you regain clarity.
• The more confused you are, the slower and more incrementally you should move.
• Refactoring exposes concepts which can be turned into abstractions
• Abstractions can be dinged by static analysis tools, but recall “Code is read many more times than it is written, so anything that increases understandability lowers costs.”
## Ch. 5 Separating Responsibilities
• Previously used flocking rules to reduce duplication and aid refactoring.
• Motivation was incoming 6 pack requirement.
• Required code to be flexible to expansion, so we refactored to ‘open’ it.
• Open/Close principle. Code should be open for extension and closed for modification, and vice versa. Do one at a time.
### 5.1.1
• Ponder the existing code. What do you like, hate or not understand?
• There’s no safety net in verses: the starting number could be greater than the ending number.
• There’s no separation of private methods. Maybe the author’s effort to keep focus on refactoring?
• Love the verse method, especially in contrast to shameless green.
• Every method we abstracted previously follows the same pattern. It has a single argument (of the same name) and two execution paths.
• Everything except successor returns a string.
• Class is stateless
• Sameness and difference should indicate information.
• Following flocking rules leads to predictably similar/different code.
Superfluous difference raises the cost of reading code, and increases the difficulty of future refactorings.
• While many forms of code can accomplish a task, choosing the right expression of code makes it more understandable, expandable and lowers costs.
• For example, using equality operators for number == 0 instead of number < 1 since number will never be negative.
• Names should reflect concepts, even if they point to the same value.
• number sent to verse is a different concept than number sent to container.
### Insisting upon messages
• OO should avoid conditionals that control behavior, as in the flocked five.
• The flocked five come from shameless green and still favor understandability over changeability.
• New requirements necessitate a ‘full-blown OO mindset’, including refactoring to reveal all the domain concepts, including classes.
Code is striving for ignorance, and preserving ignorance requires minimizing dependencies.
### Extracting Classes
• The predominant code smell in the app at this point is ‘Primitive Obsession’, which means we are using a primitive to represent a concept. The cure is to follow the ‘Extract Class’ recipe.
• The class we need to extract is for ‘bottle number’. It is not a type of bottle. It is a type of number, and that’s a very abstract concept.
The power of OO is that it lets you model ideas … Model-able ideas often lie dormant in the interactions between other objects.
• Consider a ticket management app:
• Buyer and Ticket are two obvious classes that point to concrete things.
• Discount and Refund are potential classes that model ideas.
### Naming classes
• Whereas methods should be named at one level of abstraction higher than the thing being named, classes should be named after what they are.
• This can be revisited when new information arrives.
• So we call the new class BottleNumber.
### Steps for refactoring methods or classes
Don’t modify code internals during this process
1. Parse the new code.
1. Copy new code into new place.
2. Don’t invoke it.
3. Run tests.
4. Demonstrates the parser can make sense of the new code.
2. Parse and execute it.
1. Inject calls to the new code alongside (before) the previous code.
2. Don’t remove the previous usage. Still using result of original implementation.
3. This demonstrates the code can execute without blowing up.
3. Parse, execute and use it.
1. Use result of new code by moving it after the previous code.
4. Delete unused code.
1. If tests pass, delete old code
This is an extremely mechanical, wonderfully boring, and deeply comforting refactoring process.
In real world applications, the same method name is often defined several times, and a message might get sent from many different places. Learning the art of transforming code one line at a time, while keeping the tests passing at every point, lets you undertake enormous refactorings piecemeal.
• One tactic to safely remove existing code while keeping tests green:
1. use temporary default values
2. remove code that provides the value where there is a new default
3. remove the default
### 5.3 Immutability
State is the particular condition of something at a specific time.
• Functional programming favors immutability: values are never changed in place, so reassignment (and mutable variables) becomes unnecessary.
• New values are created anew, instead of updating previous values.
• Benefits:
• Easier to reason about
• Easier to test
• Thread safe
• Cons:
• Less performant. Creating new objects is more costly than updating values.
• Often this is incorrectly assumed to be an intolerable loss. Be sure to check whether it really is an unacceptable performance cost.
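A minimal Ruby contrast between the two styles:

```ruby
# Mutable style: the same object changes state over time.
counts = [1, 2, 3]
counts << 4

# Immutable style: derive new values; the original never changes.
original = [1, 2, 3].freeze
extended = original + [4]   # a brand-new array

# original is still [1, 2, 3]; `original << 4` would raise FrozenError.
```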
### Assumptions about performance
The benefits of immutability are so great that, if it were free, you’d choose it every time.
• Costs:
• Accepting (to many programmers) new idea.
• Creating many more objects
• Immutability has parallels to caching.
• Storing a value that is expensive to retrieve.
• Assumptions:
• Will improve performance
• Will reduce costs
• Sometimes true, not always.
• Trying to predict those outcomes before writing code is often a fool’s errand.
• Metz’ Strategy:
• Write understandable, maintainable, ‘fast enough’ code.
• Collect real metrics.
• When metrics reveal unacceptable performance, improve.
• The alternative is to waste resources optimizing the wrong thing.
The first solution to any problem should avoid caching, use immutable objects, and treat object creation as free.
### Ch. 5 Summary
• The goal is to open the code to a new feature
• Identify code smells, fix them.
• Refactor horizontally before embarking on vertical tangents.
• Follow recipes, even though the result is not ‘perfect’ code.
## Ch. 6 Achieving Openness
• After a lot of work, the code still isn’t open to the incoming requirement. The question becomes: continue on this path or retreat?
• There are signs the code is moving in a good direction. Concepts are isolated, and the things that need to change have mostly been extracted.
• Some metrics are lower, but outweighed by benefit of changeable, understandable code.
### An illustrative code smell
• Our code sorta demonstrates another code smell, data clumping.
• Data clumping occurs when data fields routinely occur together.
• container and quantity emit this smell because they occur side by side in 3 out of 4 lines.
• Data clumping definition counts 3 data fields appearing repeatedly, but this is a good example, so roll with it =)
• Evidence that a new concept is waiting to emerge.
• Metz’ solution is to override the default to_s method of BottleNumber with one that prints container and quantity.
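A sketch of that override (the quantity and container bodies are assumptions based on the conditionals described in the text; the book's exact bodies may differ):

```ruby
class BottleNumber
  attr_reader :number

  def initialize(number)
    @number = number
  end

  def container
    number == 1 ? "bottle" : "bottles"
  end

  def quantity
    number.zero? ? "no more" : number.to_s
  end

  # Collapses the quantity/container data clump into one message.
  def to_s
    "#{quantity} #{container}"
  end
end
```

With to_s defined, string interpolation like `"#{bottle_number} of beer"` picks up the pair automatically.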
## 6.2 Making Sense of Conditionals
• The similarly structured if statements in BottleNumber reek of the Switch Statement code smell.
• One curative recipe is to replace the conditionals with polymorphism.
• Adds new objects that intelligently return values previously determined with conditionals.
• Increases dependencies, so do so carefully
• Assessing conditionals reveals they only care if bottle number is 0 or 1. Everything else is a default response.
• Since there are 3 conditions, we can solve this problem with 3 classes: BottleNumber and two subclasses of it: BottleNumber0 and BottleNumber1.
• Now the questions is, how do we select the correct class to use for each verse?
• With an object factory
• Isolate conditional logic to single function that selects appropriate class for the condition.
• Leans on Liskov substitution principle
• Recipe for replacing conditionals:
1. Create a subclass for the value that determines the different branches (in our case, BottleNumber1 for all conditionals depending on number == 1). Then, for each method relying on that conditional:
   1. Copy the method into the new subclass.
   2. In the subclass, remove everything but the true branch of the conditional.
   3. Create a factory that returns the appropriate class (a bottle_number_for method that returns either BottleNumber or BottleNumber1). If the factory already exists, just add the new subclass to it.
   4. In the superclass method, remove everything but the false branch.
2. Repeat until all methods that branch on the bottle number have moved into the appropriate subclasses.
### The Truth of Conditionals
It’s hard to avoid them. But you can restrict them to a single place. The code now, using a factory, no longer determines behavior based on a conditional, it simply routes the input to the appropriate polymorphic handler.
#### A conditional-less solution
There are strategies for truly avoiding conditionals. Here’s a simple one:
class BottleNumber
  def self.for(number)
    begin
      const_get("BottleNumber#{number}")
    rescue NameError
      BottleNumber
    end.new(number)
  end
end
Is the increased complexity and unconventional use of error handling as control flow worth avoiding the conditional? Depends on the situation.
### Emerging Concepts
When we started this project, the emphasis was on, “How do the verses differ from each other?” Now, after identifying code smells and following refactoring recipes, the emphasis has shifted to handling differences in bottle numbers. The verses are treated as givens with differences based on bottle numbers injected. This was not an obvious reality of the problem domain when we started.
## Fixing the Liskov violation
• The current implementation of successor violates the Liskov principle because the successor to a BottleNumber should be a BottleNumber, but instead it is a number.
• To fix
• Move the bottle_number_for factory to BottleNumber and turn it into a class method named for.
• To prevent the factory from failing during the refactor, add a temporary guard that simply returns the argument if it is already a BottleNumber.
• Modify the successor implementations to return a BottleNumber
• Modify the successor senders to expect a BottleNumber
• Delete the guard
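Putting the steps together, a sketch of the result (class names come from the text; the method bodies are assumptions based on the described recipe):

```ruby
class BottleNumber
  attr_reader :number

  def initialize(number)
    @number = number
  end

  # The factory, now a class method named `for`.
  def self.for(number)
    case number
    when 0 then BottleNumber0
    when 1 then BottleNumber1
    else BottleNumber
    end.new(number)
  end

  # Liskov-compliant: the successor of a BottleNumber
  # is itself a BottleNumber, not a raw integer.
  def successor
    BottleNumber.for(number - 1)
  end
end

class BottleNumber0 < BottleNumber
  def successor
    BottleNumber.for(99)  # the song wraps around
  end
end

class BottleNumber1 < BottleNumber
end
```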
## Adding the feature
• At this point the refactorings are complete.
• The code is now open to extension; no further refactoring is needed.
• In other words, we will write our first failing test in a while, one that must be passed by adding new behavior.
• Satisfying the six pack requirement requires adding a BottleNumber6 class and choosing it appropriately with the factory.
• Done!
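The whole change, sketched (the base class here is a minimal stand-in; quantity returning "1" for the six-pack is an assumption about the requirement's wording):

```ruby
class BottleNumber
  attr_reader :number

  def initialize(number)
    @number = number
  end

  # Factory: picks a subclass by name, falling back to the base class.
  def self.for(number)
    begin
      const_get("BottleNumber#{number}")
    rescue NameError
      BottleNumber
    end.new(number)
  end

  def container
    "bottles"
  end

  def quantity
    number.to_s
  end
end

# The entire six-pack feature: one new subclass,
# selected automatically by the factory above.
class BottleNumber6 < BottleNumber
  def container
    "six-pack"
  end

  def quantity
    "1"
  end
end
```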
### The Easy Change
This modification was extremely easy, especially in comparison to the amount of work we put into refactoring the code in preparation for the change. This is how things should be. Work hard to make code expandable, so that expanding it is easy. She supplies this apt quote from Kent Beck:
make the change easy (warning: this may be hard), then make the easy change.
## Further Reading
• https://www.amazon.com/Design-Patterns-Elements-Reusable-Object-Oriented/dp/0201633612
• http://blog.8thlight.com/uncle-bob/2013/05/27/TheTransformationPriorityPremise.html
• http://martinfowler.com/books/refactoring.html
## Programme
| Time | Monday, Nov 26 | Tuesday, Nov 27 |
| --- | --- | --- |
| 9:00-9:30 | Registration/Coffee | Registration/Coffee |
| 9:30-10:30 | Fatemeh Mohammadi | Kaie Kubjas |
| 10:30-11:00 | Coffee break | Coffee break |
| 11:00-12:00 | Fabio Rapallo | Nina Otter |
| 12:00-13:30 | Lunch | Lunch |
| 13:30-14:30 | Eva Riccomagno | Paul Breiding |
| 14:30-15:00 | Coffee break | Coffee break |
| 15:00-16:00 | Henry Wynn | Hugo Maruri-Aguilar |
| 16:00-17:00 | Christian Haase | |
• The conference dinner will take place at 19:00hr at Culinaria.
# Titles and Abstracts
Speaker : Fatemeh Mohammadi
Title : Learning Bayesian Networks Using Generalized Permutohedra
Abstract : Graphical models (Bayesian networks) based on directed acyclic graphs (DAGs) are used to model complex cause-and-effect systems. A graphical model is a family of joint probability distributions over the nodes of a graph which encodes conditional independence relations via the Markov properties. One of the fundamental problems in causality is to learn an unknown graph based on a set of observed conditional independence relations. In this talk, I will describe a greedy algorithm for DAG model selection that operates via edge walks on so-called DAG associahedra. For an undirected graph, the set of conditional independence relations is represented by a simple polytope known as the graph associahedron, which can be constructed as a Minkowski sum of standard simplices. For any regular Gaussian model and its associated set of conditional independence relations, we construct the analogous polytope, the DAG associahedron, which can be defined using relative entropy. For DAGs we construct this polytope as a Minkowski sum of matroid polytopes corresponding to Bayes-ball paths in the graph.
This is a joint work with Caroline Uhler, Charles Wang, and Josephine Yu.
Speaker: Fabio Rapallo
Title: Circuits in experimental design
Abstract: In the framework of factorial design, a statistical (linear) model is defined through an appropriate model matrix, which depends on the experimental runs and encodes their geometric structure. In this seminar we discuss some properties of the circuit basis of the model matrix in connection with two well-known properties of the designs, namely robustness and D-optimality. Exploiting the identification of a fraction with a binary contingency table, we define a criterion to check whether a fraction is saturated or not with respect to a given model, and we generalize such a result in order to study the robustness of a fraction by inspecting its intersections with the supports of the circuits. Using some simulations, we show that the combinatorial description of a fraction with respect to the circuit basis is strictly related to the notion of D-optimal fraction and to other optimality criteria. This talk is based on joint work with Henry Wynn.
Speaker: Eva Riccomagno
Title: Discovering statistical equivalence classes of discrete statistical models using computer algebra
Abstract: We show that representations of certain polynomials in terms of a nested factorization are intrinsically linked to labelled event trees. We give a recursive formula for the construction of such polynomials from a tree and present an algorithm in the computer algebra software CoCoA to derive all tree graphs from a polynomial. We finally use our results in applications linked to staged tree models. (Joint work with Anna Bigatti (University of Genova, Italy), Christiane Goergen and Jim Q. Smith (The University of Warwick, UK).)
Speaker: Christian Haase
Title: Tell me, how many modes does the Gaußian mixture have . . .
Abstract: Gaussian mixture models are widely used in Statistics. A fundamental aspect of these distributions is the study of the local maxima of the density, or modes. In particular, it is not known how many modes a mixture of k Gaussians in d dimensions can have. We give improved lower bounds and the first upper bound on the maximum number of modes, provided it is finite. (joint work with Carlos Améndola and Alexander Engström)
Speaker : Henry Wynn
Title : On Rational Interpolation
Abstract: This continues work on general multivariate interpolation in [1], based on the work of Becker and Weispfennig [2]. The use of G-bases in polynomial interpolation is well understood and has been used by the authors and co-workers, particularly to gain understanding of the identifiability of polynomial regression models over particular designs. The approach taken here is to include both the output variable $y$ and its values $y_i$ on arbitrary algebraic varieties $V_i, i = 1,\ldots,k$, respectively, in the formulation of the interpolation problem; that is to say, interpolating between the varieties. By careful specification of the problem, as in [1] and [2], it is possible to obtain rational interpolators of the form:
$$y(x) = \sum_i y_i \frac{P_i(x)}{Q_i(x)}.$$
Under special conditions it is possible to choose the denominators $Q_i(x)$ to be non-zero, thus avoiding poles. It is a challenge to make links to more standard rational interpolation such as NURBS (non-uniform rational basis splines). The application to experimental design and the possibility of a new type of rational Sobolev smoothing is also mentioned.
[1] Maruri-Aguilar, H. & Wynn, H. P. (2008). Generalised design: Interpolation and statistical modelling over varieties. Algebraic and Geometric Methods in Statistics. Cambridge University Press. 159-173.
[2] Becker, Y & Weispfennig, V. (1991). The Chinese remainder problem, multivariate interpolation and Gröbner bases. In Proc. ISSAC '91 (Bonn, Germany), 64-9.
Speaker : Kaie Kubjas
Title : Geometry and maximum likelihood estimation of the binary latent class model
Abstract : The binary latent class model consists of binary tensors of nonnegative rank at most two inside the standard simplex. We characterize its boundary stratification with the goal to use this stratification for exact maximum likelihood estimation in statistics. We explain two different approaches for deriving the boundary stratification: by studying the geometry of the model and by using the fixed points of the Expectation-Maximization algorithm. In the case of 2x2x2 tensors, we obtain closed formulas for the maximum likelihood estimates. This talk is based on the joint work with Elizabeth Allman, Hector Banos Cervantes, Robin Evans, Serkan Hosten, Daniel Lemke, John Rhodes and Piotr Zwiernik.
Speaker: Nina Otter
Title: Computable invariants for multiparameter persistent homology and their stability
Abstract: Persistent homology (PH) is arguably one of the best known methods in topological data analysis. PH allows to study topological features of data across different values of a parameter, which one can think of as scales of resolution, and provides a summary of how long individual features persist across the different scales of resolution. In many applications, data depend not only on one, but several parameters, and to apply PH to such data one therefore needs to study the evolution of qualitative features across several parameters. While the theory of 1-parameter PH is well understood, the theory of multiparameter PH is hard, and it presents one of the biggest challenges of topological data analysis. In this talk I will briefly introduce persistent homology, and then explain how tools from commutative algebra give computable invariants for multiparameter PH, which are able to capture homology classes with large persistence. I will then discuss efficient algorithms for the computation of these invariants, as well as stability questions. This talk is based on joint work with A. M. del Campo, H. Harrington, H. Schenck, U. Tillmann, and L. Waas.
Speaker: Paul Breiding
Title: Monte Carlo Homology
Abstract: Persistent homology is a tool to estimate the homology groups of a topological space from a finite point sample. The underlying idea is as follows: for varying t, put a ball of radius t around each point and compute the homology of the union of those balls. The theoretical foundation is a theorem by Niyogi, Smale and Weinberger: under the assumption that the finite point sample was drawn from the uniform distribution on a manifold, the theorem tells us how to choose the radius of the balls and the size of the sample to get the correct homology with high probability. In practice, however, the assumptions of the theorem are hard to satisfy. This is why persistent homology looks at topological features that persists for large intervals in the t-space. In this talk, I want to discuss how one could satisfy the assumptions of the Niyogi-Smale-Weinberger Theorem for manifolds that are also algebraic varieties. The algebraic structure opens the path to sampling from the uniform distribution and to computing the appropriate radius of balls. We get an estimator for the homology of the variety that returns the correct answer with high probability.
Speaker : Hugo Maruri-Aguilar
Title : Lasso and model complexity
Abstract: The statistical technique of Lasso (Tibshirani, 1996) is built around weighted penalisation of the error term by the absolute sum of coefficients. As the control parameter increases in value, model coefficients shrink progressively towards zero, thus providing the user with a collection of models that start from the ordinary least squares regression model and end with a model with no terms.
This work gravitates around hierarchical squarefree regression models. These models can be seen as simplicial complexes and thus a measure of complexity is given by Betti numbers of the "model/complex". We detail our computations and implementation of the methodology and illustrate our proposal with simulation results and also apply the methodology to a dataset from the literature.
This is joint work with S. Hu (Queen Mary).
Speaker: Emil Horobet (This talk is cancelled)
Title: Multidegrees of the extended likelihood correspondence
Abstract: Maximum likelihood estimation is a fundamental problem in statistics. For discrete statistical models the EM algorithm aims to solve this. One of the drawbacks of this algorithm is that the optimal solution either lies in the relative interior of the model or it lies in the model’s boundary (which we can consider as a submodel). We want to characterize those data which have at least one critical point on a given submodel (for example the boundary of the respective model). In order to do this we consider the extended likelihood correspondence, which is the graph of the conormal variety under the Hadamard product. We develop bounds on the algebraic complexity (multidegrees) of computing MLE on these submodels. This talk is based on joint work with J.I Rodriguez.
Last modified: 21.02.2019 - Contact: Webmaster
## Journal of Computational Mathematics
Short Title: J. Comput. Math.
Publisher: Global Science Press, Hong Kong; Chinese Academy of Sciences, Institute of Computational Mathematics, Beijing
ISSN: 0254-9409; 1991-7139/e
Online: https://www.global-sci.org/jcm/periodical_list.html
Comments: Indexed cover-to-cover
Documents Indexed: 1,844 Publications (since 1983) References Indexed: 69 Publications with 2,244 References.
### Latest Issues
40, No. 6 (2022) 40, No. 5 (2022) 40, No. 4 (2022) 40, No. 3 (2022) 40, No. 2 (2022) 40, No. 1 (2022) 39, No. 6 (2021) 39, No. 5 (2021) 39, No. 4 (2021) 39, No. 3 (2021) 39, No. 2 (2021) 39, No. 1 (2021) 38, No. 6 (2020) 38, No. 5 (2020) 38, No. 4 (2020) 38, No. 3 (2020) 38, No. 2 (2020) 38, No. 1 (2020) 37, No. 6 (2019) 37, No. 5 (2019) 37, No. 4 (2019) 37, No. 3 (2019) 37, No. 2 (2019) 37, No. 1 (2019) 36, No. 6 (2018) 36, No. 5 (2018) 36, No. 4 (2018) 36, No. 3 (2018) 36, No. 2 (2018) 36, No. 1 (2018) 35, No. 6 (2017) 35, No. 5 (2017) 35, No. 4 (2017) 35, No. 3 (2017) 35, No. 2 (2017) 35, No. 1 (2017) 34, No. 6 (2016) 34, No. 5 (2016) 34, No. 4 (2016) 34, No. 3 (2016) 34, No. 2 (2016) 34, No. 1 (2016) 33, No. 6 (2015) 33, No. 5 (2015) 33, No. 4 (2015) 33, No. 3 (2015) 33, No. 2 (2015) 33, No. 1 (2015) 32, No. 6 (2014) 32, No. 5 (2014) 32, No. 4 (2014) 32, No. 3 (2014) 32, No. 2 (2014) 32, No. 1 (2014) 31, No. 6 (2013) 31, No. 5 (2013) 31, No. 4 (2013) 31, No. 3 (2013) 31, No. 2 (2013) 31, No. 1 (2013) 30, No. 6 (2012) 30, No. 5 (2012) 30, No. 4 (2012) 30, No. 3 (2012) 30, No. 2 (2012) 30, No. 1 (2012) 29, No. 6 (2011) 29, No. 5 (2011) 29, No. 4 (2011) 29, No. 3 (2011) 29, No. 2 (2011) 29, No. 1 (2011) 28, No. 6 (2010) 28, No. 5 (2010) 28, No. 4 (2010) 28, No. 3 (2010) 28, No. 2 (2010) 28, No. 1 (2010) 27, No. 6 (2009) 27, No. 5 (2009) 27, No. 4 (2009) 27, No. 2-3 (2009) 27, No. 1 (2009) 26, No. 6 (2008) 26, No. 5 (2008) 26, No. 4 (2008) 26, No. 3 (2008) 26, No. 2 (2008) 26, No. 1 (2008) 25, No. 6 (2007) 25, No. 5 (2007) 25, No. 4 (2007) 25, No. 3 (2007) 25, No. 2 (2007) 25, No. 1 (2007) 24, No. 6 (2006) 24, No. 5 (2006) 24, No. 4 (2006) 24, No. 3 (2006) 24, No. 2 (2006) ...and 93 more Volumes
### Authors
35 Guo, Ben-Yu 26 Lin, Qun 24 Bai, Zhongzhi 24 Shi, Zhongci 23 Han, Houde 20 Yuan, Ya-xiang 19 Sun, Jiguang 19 Yu, Dehao 18 Shi, Dongyang 18 Xu, Guoliang 17 Wang, Lieheng 15 Qin, Mengzhao 15 Shen, Longjun 15 Xu, Jinchao 14 Chen, Yanping 14 Li, Kaitai 14 Sun, Geng 14 Tang, Tao 13 Zhu, Youlan 12 Sun, Jiachang 12 Tang, Huazhong 12 Wang, Ming 12 Wang, Renhong 11 Chen, Shaochun 11 Feng, Kang 11 Huang, Yunqing 11 Shi, Yingguang 11 Xu, Xuejun 11 Zhang, Zhimin 11 Zhou, Yulin 10 He, Yinnian 10 Ma, Changfeng 10 Wu, Huamo 10 Yang, Danping 9 Feng, Minfu 9 Guo, Boling 9 He, Bingsheng 9 Huang, Hongci 9 Li, Shoufu 9 Ming, Pingbing 9 Xie, Xiaoping 9 Yan, Ningning 9 Zhang, Chengjian 9 Zhang, Pingwen 8 Cheng, Xiaoliang 8 Dai, Yu-Hong 8 Jiang, Yaolin 8 Kuang, Jiaoxun 8 Liang, Guoping 8 Liu, Xinguo 8 Lu, Tao 8 Pan, Ping-Qi 8 Shu, Chi-Wang 8 Tang, Yifa 8 Wang, Xinghua 8 Wen, Xin 8 Xiao, Aiguo 8 Zhang, Guanquan 8 Zhang, Lei 8 Zhou, Aihui 7 Hu, Jun 7 Hu, Xiyan 7 Jin, Shi 7 Lin, Yanping 7 Liu, Mingzhu 7 Mao, Shipeng 7 Wang, Daoliu 7 Wang, Deren 7 Wu, Xionghua 7 Yuan, Guangwei 7 Zhang, Tie 7 Zhu, Qiding 6 Carstensen, Carsten 6 Chang, Qianshun 6 Chen, Chuanmiao 6 Chen, Jinru 6 Feng, Yuyu 6 Liao, Anping 6 Liu, Degui 6 Ma, Fuming 6 Sun, Wenyu 6 Wei, Ziluan 6 Ying, Lungan 6 You, Zhaoyong 6 Zhang, Guofeng 6 Zhang, Liansheng 6 Zhang, Sheng 6 Zheng, Shiming 5 Chang, Yanzhen 5 Chen, Guangnan 5 Chen, Zhiming 5 Dai, Hua 5 Han, Weimin 5 Hinze, Michael 5 Hoppe, Ronald H. W. 5 Huang, Jianguo 5 Jiang, Erxiong 5 Kozak, Jernej 5 Křížek, Michal 5 Liu, Jijun ...and 1,793 more Authors
### Fields
1,550 Numerical analysis (65-XX) 613 Partial differential equations (35-XX) 190 Operations research, mathematical programming (90-XX) 184 Fluid mechanics (76-XX) 135 Ordinary differential equations (34-XX) 103 Linear and multilinear algebra; matrix theory (15-XX) 89 Mechanics of deformable solids (74-XX) 87 Approximations and expansions (41-XX) 86 Calculus of variations and optimal control; optimization (49-XX) 57 Dynamical systems and ergodic theory (37-XX) 55 Integral equations (45-XX) 52 Optics, electromagnetic theory (78-XX) 33 Information and communication theory, circuits (94-XX) 31 Operator theory (47-XX) 29 Probability theory and stochastic processes (60-XX) 28 Computer science (68-XX) 26 Biology and other natural sciences (92-XX) 22 Mechanics of particles and systems (70-XX) 21 Quantum theory (81-XX) 18 Functions of a complex variable (30-XX) 18 Classical thermodynamics, heat transfer (80-XX) 17 Real functions (26-XX) 17 Statistical mechanics, structure of matter (82-XX) 16 Systems theory; control (93-XX) 14 Harmonic analysis on Euclidean spaces (42-XX) 13 Geophysics (86-XX) 12 Potential theory (31-XX) 12 Statistics (62-XX) 11 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 7 Field theory and polynomials (12-XX) 6 History and biography (01-XX) 6 Combinatorics (05-XX) 5 Algebraic geometry (14-XX) 5 Global analysis, analysis on manifolds (58-XX) 4 Differential geometry (53-XX) 3 General and overarching topics; collections (00-XX) 3 Integral transforms, operational calculus (44-XX) 2 Special functions (33-XX) 2 Functional analysis (46-XX) 1 Mathematical logic and foundations (03-XX) 1 Number theory (11-XX) 1 Sequences, series, summability (40-XX) 1 Geometry (51-XX) 1 Convex and discrete geometry (52-XX) 1 Relativity and gravitational theory (83-XX) 1 Astronomy and astrophysics (85-XX)
### Citations contained in zbMATH Open
1,138 Publications have been cited 7,528 times in 5,992 Documents
On spectral methods for Volterra integral equations and the convergence analysis. Zbl 1174.65058
Tang, Tao; Xu, Xiang; Cheng, Jin
2008
A shift-splitting preconditioner for non-Hermitian positive definite matrices. Zbl 1120.65054
Bai, Zhongzhi; Yin, Junfeng; Su, Yangfeng
2006
Difference schemes for Hamiltonian formalism and symplectic geometry. Zbl 0596.65090
Feng, Kang
1986
On a Hermitian and skew-Hermitian splitting iteration method for continuous Sylvester equations. Zbl 1249.65090
Bai, Zhongzhi
2011
An anisotropic nonconforming finite element with some superconvergence results. Zbl 1074.65133
Shi, Dongyang; Mao, Shipeng; Chen, Shaochun
2005
Local and parallel finite element algorithms for the Navier-Stokes problem. Zbl 1093.76035
He, Yinnian; Xu, Jinchao; Zhou, Aihui
2006
Uniformly-stable finite element methods for Darcy-Stokes-Brinkman models. Zbl 1174.76013
Xie, Xiaoping; Xu, Jinchao; Xue, Guangri
2008
Approximation of infinite boundary condition and its application to finite element methods. Zbl 0579.65111
Han, Houde; Wu, Xiaonan
1985
A relaxed HSS preconditioner for saddle point problems from meshfree discretization. Zbl 1299.65043
Cao, Yang; Yao, Linquan; Jiang, Meiqun; Niu, Qiang
2013
Generalized difference methods on arbitrary quadrilateral networks. Zbl 0946.65098
Li, Yonghai; Li, Ronghua
1999
A set of symmetric quadrature rules on triangles and tetrahedra. Zbl 1199.65081
Zhang, Linbo; Cui, Tao; Liu, Hui
2009
A spectral method for pantograph-type delay differential equations and its convergence analysis. Zbl 1212.65308
Ali, Ishtiaq; Brunner, Hermann; Tang, Tao
2009
On Newton-HSS methods for systems of nonlinear equations with positive-definite Jacobian matrices. Zbl 1224.65133
Bai, Zhongzhi; Guo, Xueping
2010
Non-stationary Stokes flows under leak boundary conditions of friction type. Zbl 0993.76014
Fujita, Hiroshi
2001
Constrained quadrilateral nonconforming rotated $$Q_{1}$$ element. Zbl 1086.65111
Hu, Jun; Shi, Zhong-ci
2005
A new class of variational formulations for the coupling of finite and boundary element methods. Zbl 0712.65093
Han, Houde
1990
Nonconforming elements in the mixed finite element method. Zbl 0573.65083
Han, Houde
1984
Finite element approximations of symmetric tensors on simplicial grids in $$\mathbb R^n$$: the higher order case. Zbl 1340.74095
Hu, Jun
2015
On the P1 Powell-Sabin divergence-free finite element for the Stokes equations. Zbl 1174.65039
Zhang, Shangyou
2008
Jacobi spectral approximations to differential equations on the half line. Zbl 0948.65071
Guo, Benyu
2000
A new finite element approximation of a state-constrained optimal control problem. Zbl 1199.49067
Liu, Wenbin; Gong, Wei; Yan, Ningning
2009
Antiperiodic wavelets. Zbl 0839.42014
Chen, H. L.
1996
A directional do-nothing condition for the Navier-Stokes equations. Zbl 1324.76015
Braack, Malte; Mucha, Piotr Boguslaw
2014
Two-step modulus-based synchronous multisplitting iteration methods for linear complementarity problems. Zbl 1340.65124
Zhang, Lili
2015
Asymptotic radiation conditions for reduced wave equation. Zbl 0559.65085
Feng, Kang
1984
Local discontinuous Galerkin methods for three classes of nonlinear wave equations. Zbl 1050.65093
Xu, Yan; Shu, Chiwang
2004
Parallel auxiliary space AMG for $$\boldsymbol{H}(\operatorname{curl})$$ problems. Zbl 1212.65128
Kolev, Tzanio V.; Vassilevski, Panayot S.
2009
A note on Jacobi spectral-collocation methods for weakly singular Volterra integral equations with smooth solutions. Zbl 1289.65284
Chen, Yanping; Li, Xianjuan; Tang, Tao
2013
A new stepsize for the steepest descent method. Zbl 1101.65067
Yuan, Ya-Xiang
2006
Symplectic partitioned Runge-Kutta methods. Zbl 0789.65049
Sun, Geng
1993
Construction of canonical difference schemes for Hamiltonian formalism via generating functions. Zbl 0681.70020
Feng, Kang; Wu, Huamo; Qin, Mengzhao; Wang, Daoliu
1989
Stability analysis and application of the exponential time differencing schemes. Zbl 1052.65081
Du, Qiang; Zhu, Wenxiang
2004
The restrictively preconditioned conjugate gradient methods on normal residual for block two-by-two linear systems. Zbl 1174.65014
Yin, Junfeng; Bai, Zhongzhi
2008
Multigrid methods for obstacle problems. Zbl 1199.65401
Gräser, Carsten; Kornhuber, Ralf
2009
On semilocal convergence of inexact Newton methods. Zbl 1142.65354
Guo, Xueping
2007
Some $$n$$-rectangle nonconforming elements for fourth order elliptic equations. Zbl 1142.65451
Wang, Ming; Shi, Zhongci; Xu, Jinchao
2007
A modified Levenberg-Marquardt algorithm for singular system of nonlinear equations. Zbl 1032.65053
Fan, Jinyan
2003
Laguerre pseudospectral method for nonlinear partial differential equations. Zbl 1005.65115
Xu, Chenglong; Guo, Benyu
2002
Proximal point algorithm for minimization of DC function. Zbl 1107.90427
Sun, Wenyu; Sampaio, Raimundo J. B.; Candido, M. A. B.
2003
Fixed-point continuation applied to compressed sensing: Implementation and numerical experiments. Zbl 1224.65153
Hale, Elaine T.; Yin, Wotao; Zhang, Yin
2010
Linear finite elements with high accuracy. Zbl 0577.65094
Lin, Qun; Xu, Jinchao
1985
A posteriori error estimates for finite element approximations of the Cahn-Hilliard equation and the Hele-Shaw flow. Zbl 1174.65035
Feng, Xiaobing; Wu, Haijun
2008
Modified Morley element method for a fourth order elliptic singular perturbation problem. Zbl 1102.65118
Wang, Ming; Xu, Jin-chao; Hu, Yucheng
2006
Optimal Delaunay triangulations. Zbl 1048.65020
Chen, Long; Xu, Jinchao
2004
Variational discretization for optimal control governed by convection dominated diffusion equations. Zbl 1212.65248
Hinze, Michael; Yan, Ningning; Zhou, Zhaojie
2009
Implicit-explicit scheme for the Allen-Cahn equation preserves the maximum principle. Zbl 1374.65154
Tang, Tao; Yang, Jiang
2016
An anisotropic nonconforming finite element method for approximating a class of nonlinear Sobolev equations. Zbl 1212.65457
Shi, Dongyang; Wang, Haihong; Du, Yuepeng
2009
Explicit error estimates for Courant, Crouzeix-Raviart and Raviart-Thomas finite element methods. Zbl 1274.65290
Carstensen, Carsten; Gedicke, Joscha; Rim, Donsub
2012
Error estimates for the recursive linearization of inverse medium problems. Zbl 1240.35574
Bao, Gang; Triki, Faouzi
2010
Robust globally divergence-free weak Galerkin methods for Stokes equations. Zbl 1389.76027
Chen, Gang; Feng, Minfu; Xie, Xiaoping
2016
Shock and boundary structure formation by spectral-Lagrangian methods for the inhomogeneous Boltzmann transport equation. Zbl 1228.76138
Gamba, Irene M.; Tharkabhushanam, Sri Harsha
2010
Optimal error estimates for Nédélec edge elements for time-harmonic Maxwell’s equations. Zbl 1212.65467
Zhong, Liuqiang; Shu, Shi; Wittum, Gabriel; Xu, Jinchao
2009
An unfitted $$hp$$-interface penalty finite element method for elliptic interface problems. Zbl 1449.65326
Wu, Haijun; Xiao, Yuanming
2019
Nonlinear stability of natural Runge-Kutta methods for neutral delay differential equations. Zbl 1018.65101
Zhang, Chengjian
2002
A discontinuous Galerkin method for the fourth-order curl problem. Zbl 1289.76053
Hong, Qingguo; Hu, Jun; Shu, Shi; Xu, Jinchao
2012
A note on simple non-zero singular values. Zbl 0662.15008
Sun, Jiguang
1988
A priori error estimate and superconvergence analysis for an optimal control problem of bilinear type. Zbl 1174.49002
Yang, Danping; Chang, Yanzhen; Liu, Wenbin
2008
Least-squares solution of $$AXB= D$$ over symmetric positive semidefinite matrices $$X$$. Zbl 1029.65042
Liao, Anping; Bai, Zhongzhi
2003
A numerical study of uniform superconvergence of LDG method for solving singularly perturbed problems. Zbl 1212.65422
Xie, Ziqing; Zhang, Zuozheng; Zhang, Zhimin
2009
Eigenvalues and eigenvectors of a matrix dependent on several parameters. Zbl 0618.15009
Sun, Jiguang
1985
Generalized Laguerre approximation and its applications to exterior problems. Zbl 1073.65130
Guo, Benyu; Shen, Jie; Xu, Chenglong
2005
On $$L^2$$ error estimates for weak Galerkin finite element methods for parabolic problems. Zbl 1313.65246
Gao, Fuzheng; Mu, Lin
2014
Compact fourth-order finite difference schemes for Helmholtz equation with high wave numbers. Zbl 1174.65042
Fu, Yiping
2008
A regularized conjugate gradient method for symmetric positive definite system of linear equations. Zbl 1002.65040
Bai, Zhongzhi; Zhang, Shaoliang
2002
Optimal and pressure-independent $$L^2$$ velocity error estimates for a modified Crouzeix-Raviart Stokes element with BDM reconstructions. Zbl 1340.76024
Brennecke, C.; Linke, A.; Merdon, C.; Schöberl, J.
2015
A family of parallel and interval iterations for finding all roots of a polynomial simultaneously with rapid convergence. Zbl 0541.65028
Wang, Xinghua; Zheng, Shiming
1984
A modified HSS iteration method for solving the complex linear matrix equation $$AXB = C$$. Zbl 1374.65077
Zhou, Rong; Wang, Xiang; Zhou, Peng
2016
Superconvergence of DG method for one-dimensional singularly perturbed problems. Zbl 1142.65388
Xie, Ziqing; Zhang, Zhimin
2007
An LQP based interior prediction-correction method for nonlinear complementarity problems. Zbl 1109.65054
He, Bingsheng; Liao, Lizhi; Yuan, Xiaoming
2006
A posteriori error estimates in Adini finite element for eigenvalue problems. Zbl 0957.65092
Yang, Yidu
2000
Linear convergence of the LZI algorithm for weakly positive tensors. Zbl 1265.65065
Zhang, Liping; Qi, Liqun; Xu, Yi
2012
Exponential Fourier collocation methods for solving first-order differential equations. Zbl 1413.65309
Wang, Bin; Wu, Xinyuan; Meng, Fanwei; Fang, Yonglei
2017
Coercivity of the single layer heat potential. Zbl 0672.65092
Arnold, Douglas N.; Noon, Patrick J.
1989
The mechanical quadrature methods and their extrapolation for solving BIE of Steklov eigenvalue problems. Zbl 1069.65123
Huang, Jin; Lü, Tao
2004
On the minimal nonnegative solution of nonsymmetric algebraic Riccati equation. Zbl 1079.65052
Guo, Xiaoxia; Bai, Zhongzhi
2005
Hermite WENO schemes with Lax-Wendroff type time discretizations for Hamilton-Jacobi equations. Zbl 1142.65403
Qiu, Jianxian
2007
A modified projection and contraction method for a class of linear complementarity problems. Zbl 0854.65047
He, B. S.
1996
Construction of high order symplectic Runge-Kutta methods. Zbl 0787.65053
Sun, Geng
1993
Testing different conjugate gradient methods for large-scale unconstrained optimization. Zbl 1041.65048
Dai, Yuhong; Ni, Qin
2003
A two-level finite element Galerkin method for the nonstationary Navier-Stokes equations. II: Time discretization. Zbl 1137.76413
He, Yinnian; Miao, Huanling; Ren, Chunfeng
2004
ReLU deep neural networks and linear finite elements. Zbl 1463.68072
He, Juncai; Li, Lin; Xu, Jinchao; Zheng, Chunyue
2020
A sparse-grid method for multi-dimensional backward stochastic differential equations. Zbl 1289.65011
Zhang, Guannan; Gunzburger, Max; Zhao, Weidong
2013
Generalized Bernstein-Bézier polynomials. Zbl 0547.41020
Chang, Gengzhe
1983
A Legendre pseudospectral method for solving nonlinear Klein-Gordon equation. Zbl 0876.65073
Li, Xun; Guo, B. Y.
1997
Essentially symplectic boundary value methods for linear Hamiltonian systems. Zbl 0884.65070
Brugnano, L.
1997
A class of asynchronous parallel multisplitting relaxation methods for large sparse linear complementarity problems. Zbl 1047.65041
Bai, Zhongzhi; Huang, Yuguang
2003
A two-level finite element Galerkin method for the nonstationary Navier-Stokes equations. I: Spatial discretization. Zbl 1137.76412
He, Yinnian
2004
Direct minimization for calculating invariant subspaces in density functional computations of the electronic structure. Zbl 1212.81001
Schneider, Reinhold; Rohwedder, Thorsten; Neelov, Alexey; Blauert, Johannes
2009
A coarsening algorithm on adaptive grids by newest vertex bisection and its applications. Zbl 1240.65350
Chen, Long; Zhang, Chensong
2010
Approximation, stability and fast evaluation of exact artificial boundary condition for the one-dimensional heat equation. Zbl 1150.65022
Zheng, Chunxiong
2007
Numerical boundary conditions for the fast sweeping high order WENO methods for solving the eikonal equation. Zbl 1174.65043
Huang, Ling; Shu, Chiwang; Zhang, Mengping
2008
Non-quasi-Newton updates for unconstrained optimization. Zbl 0823.65062
Yuan, Yaxiang; Byrd, Richard H.
1995
A new direct discontinuous Galerkin method with symmetric structure for nonlinear diffusion equations. Zbl 1299.65236
2013
Superconvergence of a discontinuous Galerkin method for first-order linear delay differential equations. Zbl 1249.65162
Li, Dongfang; Zhang, Chengjian
2011
Convergence of Newton’s method for systems of equations with constant rank derivatives. Zbl 1150.49011
Xu, Xiubin; Li, Chong
2007
A posteriori energy-norm error estimates for advection-diffusion equations approximated by weighted interior penalty methods. Zbl 1174.65034
Ern, Alexandre; Stephansen, Annette F.
2008
Multivariate Fourier series over a class of non tensor-product partition domains. Zbl 1030.65143
Sun, Jiachang
2003
The solvability conditions for the inverse problem of bisymmetric nonnegative definite matrices. Zbl 0966.15008
Xie, Dongxiu; Zhang, Lei; Hu, Xiyan
2000
Global superconvergence of the mixed finite element methods for 2-D Maxwell equations. Zbl 1032.65101
Lin, Jiafu; Lin, Qun
2003
An efficient method for computing hyperbolic systems with geometrical source terms having concentrations. Zbl 1119.65373
Jin, Shi; Wen, Xin
2004
Uniform stability and error analysis for some discontinuous Galerkin methods. Zbl 1474.65439
Hong, Qingguo; Xu, Jinchao
2021
Quadrature methods for highly oscillatory singular integrals. Zbl 1474.65056
Gao, Jing; Condon, Marissa; Iserles, Arieh; Gilvey, Benjamin; Trevelyan, Jon
2021
Deep ReLU networks overcome the curse of dimensionality for generalized bandlimited functions. Zbl 07533079
Montanelli, Hadrien; Yang, Haizhao; Du, Qiang
2021
The random batch method for $$N$$-body quantum dynamics. Zbl 07533084
Golse, François; Jin, Shi; Paul, Thierry
2021
A mixed virtual element method for the Boussinesq problem on polygonal meshes. Zbl 1488.65613
Gatica, Gabriel N.; Munar, Mauricio; Sequeira, Filander A.
2021
Constraint-preserving energy-stable scheme for the 2D simplified Ericksen-Leslie system. Zbl 1474.65342
Bao, Xuelian; Chen, Rui; Zhang, Hui
2021
Boundary value methods for Caputo fractional differential equations. Zbl 1474.65237
Zhou, Yongtao; Zhang, Chengjian; Wang, Huiru
2021
Can a cubic spline curve be $${G^3}$$? Zbl 1474.65026
Liu, Wujie; Li, Xin
2021
Stability analysis of the split-step theta method for nonlinear regime-switching jump systems. Zbl 1474.65193
Li, Guangjie; Yang, Qigui
2021
Sub-optimal convergence of discontinuous Galerkin methods with central fluxes for linear hyperbolic equations with even degree polynomial approximations. Zbl 07533065
Liu, Yong; Shu, Chi-Wang; Zhang, Mengping
2021
Well-conditioned frames for high order finite element methods. Zbl 1488.65622
Hu, Kaibo; Winther, Ragnar
2021
Two novel gradient methods with optimal step sizes. Zbl 1488.90134
Oviedo, Harry; Dalmau, Oscar; Herrera, Rafael
2021
ReLU deep neural networks and linear finite elements. Zbl 1463.68072
He, Juncai; Li, Lin; Xu, Jinchao; Zheng, Chunyue
2020
How to prove the discrete reliability for nonconforming finite element methods. Zbl 1463.65362
Carstensen, Carsten; Puttkammer, Sophie
2020
Computational multiscale methods for linear heterogeneous poroelasticity. Zbl 1463.65284
Altmann, Robert; Chung, Eric; Maier, Roland; Peterseim, Daniel; Pun, Saimang
2020
An error analysis method SPP-BEAM and a construction guideline of nonconforming finite elements for fourth order elliptic problems. Zbl 1463.65110
Hu, Jun; Zhang, Shangyou
2020
Two-stage fourth-order accurate time discretizations for 1D and 2D special relativistic hydrodynamics. Zbl 1463.65269
Yuan, Yuhuan; Tang, Huazhong
2020
Convergence of Laplacian spectra from random samples. Zbl 1474.65430
Tao, Wenqi; Shi, Zuoqiang
2020
A high-order accuracy method for solving the fractional diffusion equations. Zbl 1463.65241
Ran, Maohua; Zhang, Chengjian
2020
Discontinuous Galerkin methods and their adaptivity for the tempered fractional (convection) diffusion equations. Zbl 1474.65371
Wang, Xudong; Deng, Weihua
2020
A new approximation algorithm for the matching distance in multidimensional persistence. Zbl 1463.65023
Cerri, Andrea; Frosini, Patrizio
2020
Developable surface patches bounded by NURBS curves. Zbl 1463.65016
Fernandez-Jambrina, Leonardo; Perez-Arribas, Francisco
2020
Convergence and optimality of adaptive mixed methods for Poisson’s equation in the FEEC framework. Zbl 1463.65371
Holst, Michael; Li, Yuwen; Mihalik, Adam; Szypowski, Ryan
2020
Accurate and efficient image reconstruction from multiple measurements of Fourier samples. Zbl 1474.94026
Scarnati, T.; Gelb, Anne
2020
Solution of optimal transportation problems using a multigrid linear programming approach. Zbl 1474.90281
2020
A robust discretization of the Reissner-Mindlin plate with arbitrary polynomial degree. Zbl 1463.65367
Gallistl, Dietmar; Schedensack, Mira
2020
The quadratic Specht triangle. Zbl 1463.65375
Li, Hongliang; Ming, Pingbing; Shi, Zhongci
2020
Local pressure correction for the Stokes system. Zbl 1463.76021
Braack, Malte; Kaya, Utku
2020
An efficient ADER discontinuous Galerkin scheme for directly solving Hamilton-Jacobi equation. Zbl 1463.65292
Duan, Junming; Tang, Huazhong
2020
A balanced oversampling finite element method for elliptic problems with observational boundary data. Zbl 1463.65364
Chen, Zhiming; Tuo, Rui; Zhang, Wenlong
2020
A new stabilized finite element method for solving transient Navier-Stokes equations with high Reynolds number. Zbl 1463.65312
Xie, Chunmei; Feng, Minfu
2020
Image denoising via time-delay regularization coupled nonlinear diffusion equations. Zbl 1463.94006
Ma, Qianting
2020
Efficient linear schemes with unconditional energy stability for the phase field model of solid-state dewetting problems. Zbl 1463.65291
Chen, Jie; He, Zhengkang; Sun, Shuyu; Guo, Shimin; Chen, Zhangxin
2020
Implicitly linear collocation method and iterated implicitly linear collocation method for the numerical solution of Hammerstein Fredholm integral equations on 2D irregular domains. Zbl 1463.65426
2020
Solving systems of quadratic equations via exponential-type gradient descent algorithm. Zbl 1463.90172
Huang, Meng; Xu, Zhiqiang
2020
On new strategies to control the accuracy of WENO algorithm close to discontinuities. II: Cell averages and multiresolution. Zbl 1463.65213
Amat, Sergio; Ruiz, Juan; Shu, Chiwang
2020
An unfitted $$hp$$-interface penalty finite element method for elliptic interface problems. Zbl 1449.65326
Wu, Haijun; Xiao, Yuanming
2019
Alternating direction implicit schemes for the two-dimensional time fractional nonlinear super-diffusion equations. Zbl 1449.65181
2019
Regularized two-stage stochastic variational inequalities for Cournot-Nash equilibrium under uncertainty. Zbl 1474.91008
Jiang, Jie; Shi, Yun; Wang, Xiaozhou; Chen, Xiaojun
2019
Unconditionally superclose analysis of a new mixed finite element method for nonlinear parabolic equations. Zbl 1438.65292
Shi, Dongyang; Yan, Fengna; Wang, Junjun
2019
Improved relaxed positive-definite and skew-Hermitian splitting preconditioners for saddle point problems. Zbl 1438.65035
Cao, Yang; Ren, Zhiru; Yao, Linquan
2019
On the validity of the local Fourier analysis. Zbl 1449.65374
Rodrigo, Carmen; Gaspar, Francisco J.; Zikatanov, Ludmil T.
2019
Stabilized Barzilai-Borwein method. Zbl 1463.65135
Burdakov, Oleg; Dai, Yuhong; Huang, Na
2019
$${C^0}$$ discontinuous Galerkin methods for a plate frictional contact problem. Zbl 1463.74094
Wang, Fei; Zhang, Tianyi; Han, Weimin
2019
A discontinuous Galerkin method by patch reconstruction for biharmonic problem. Zbl 1449.65321
Li, Ruo; Ming, Pingbing; Sun, Zhiyuan; Yang, Fanyi; Yang, Zhijian
2019
A unified algorithmic framework of symmetric Gauss-Seidel decomposition based proximal ADMMs for convex composite programming. Zbl 1463.90154
Chen, Liang; Sun, Defeng; Toh, Kim Chuan; Zhang, Ning
2019
Ryu, Ernest K.; Yin, Wotao
2019
Superconvergence analysis for time-fractional diffusion equations with nonconforming mixed finite element method. Zbl 1449.65266
Zhang, Houchao; Shi, Dongyang
2019
Numerical analysis of elliptic hemivariational inequalities for semipermeable media. Zbl 1449.65137
Han, Weimin; Huang, Ziping; Wang, Cheng; Xu, Wei
2019
A decoupling two-grid method for the steady-state Poisson-Nernst-Planck equations. Zbl 1449.65329
Yang, Ying; Lu, Benzhuo; Xie, Yan
2019
A first-order splitting method for solving a large-scale composite convex optimization problem. Zbl 1449.90292
Tang, Yuchao; Wu, Guorong; Zhu, Chuanxi
2019
Tackling industrial-scale supply chain problems by mixed-integer programming. Zbl 1463.90001
Gamrath, Gerald; Gleixner, Ambros; Koch, Thorsten; Miltenberger, Matthias; Kniasew, Dimitri; Schlogel, Dominik; Martin, Alexander; Weninger, Dieter
2019
The high order block RIP condition for signal recovery. Zbl 1438.94033
Li, Yaling; Chen, Wengu
2019
Singularity-free numerical scheme for the stationary Wigner equation. Zbl 1438.65272
Lu, Tiao; Sun, Zhangpeng
2019
Extrapolation methods for computing Hadamard finite-part integral on finite intervals. Zbl 1438.65029
Li, Jin; Rui, Hongxing
2019
The factorization method for a mixed scattering problem from a bounded obstacle and an open arc. Zbl 1449.78013
Wu, Qinghua; Zeng, Meilan; Xiong, Wentao; Yan, Guozheng; Guo, Jun
2019
Numerical solutions of nonautonomous stochastic delay differential equations by discontinuous Galerkin methods. Zbl 1449.65005
Dai, Xinjie; Xiao, Aiguo
2019
The structure-preserving methods for the Degasperis-Procesi equation. Zbl 1449.65353
Zhang, Yuze; Wang, Yushun; Yang, Yanhong
2019
A fourth-order compact and conservative difference scheme for the generalized Rosenau-Korteweg de Vries equation in two dimensions. Zbl 1449.65200
Wang, Jue; Zeng, Qingnan
2019
Uniformly convergent nonconforming tetrahedral element for Darcy-Stokes problem. Zbl 1438.65284
Dong, Lina; Chen, Shaochun
2019
Parareal algorithms applied to stochastic differential equations with conserved quantities. Zbl 1438.60096
Zhang, Liying; Zhou, Weien; Ji, Lihai
2019
Descent direction stochastic approximation algorithm with adaptive step sizes. Zbl 1438.62151
Luzanin, Zorana; Stojkovska, Irena; Kresoja, Milena
2019
A general class of one-step approximation for index-1 stochastic delay-differential-algebraic equations. Zbl 1438.60093
Qin, Tingting; Zhang, Chengjian
2019
An improved variational model and its numerical solutions for speckle noise removal from real ultrasound images. Zbl 1438.92041
2019
Exponential integrators for stochastic Schrödinger equations driven by Itô noise. Zbl 1413.65006
Anton, Rikard; Cohen, David
2018
A weak Galerkin finite element method for the linear elasticity problem in mixed form. Zbl 1424.74046
Wang, Ruishu; Zhang, Ran
2018
Weak error estimates for trajectories of SPDEs under spectral Galerkin discretization. Zbl 1413.65008
Brehier, Charles Edouard; Hairer, Martin; Stuart, Andrew M.
2018
The reconstruction of obstacles in a waveguide using finite elements. Zbl 1413.78006
Zhang, Ruming; Sun, Jiguang
2018
A survey of open cavity scattering problems. Zbl 1413.78005
Li, Peijun
2018
A BIE-based DtN-FEM for fluid-solid interaction problems. Zbl 1413.74097
Yin, Tao; Rathsfeld, Andreas; Xu, Liwei
2018
A first-order numerical scheme for forward-backward stochastic differential equations in bounded domains. Zbl 1413.65012
Yang, Jie; Zhang, Guannan; Zhao, Weidong
2018
On doubly positive semidefinite programming relaxations. Zbl 1413.90197
Fu, Taoran; Ge, Dongdong; Ye, Yinyu
2018
Transformations for the prize-collecting Steiner tree problem and the maximum-weight connected subgraph problem to SAP. Zbl 1413.90228
Rehfeldt, Daniel; Koch, Thorsten
2018
High order compact multisymplectic scheme for coupled nonlinear Schrödinger-KdV equations. Zbl 1424.65244
Wang, Lan; Wang, Yushun
2018
Decoupled energy stable scheme for hydrodynamic Allen-Cahn phase field moving contact line model. Zbl 1424.76022
Chen, Rui; Yang, Xiaofeng; Zhang, Hui
2018
An over-penalized weak Galerkin method for second-order elliptic problems. Zbl 1424.65221
Liu, Kaifang; Song, Lunji; Zhou, Shuangfeng
2018
Fast spectral Galerkin method for logarithmic singular equations on a segment. Zbl 1413.65485
Jerez-Hanckes, Carlos; Nicaise, Serge; Urzua-Torres, Carolina
2018
Eigenvalues of the Neumann-Poincare operator for two inclusions with contact of order $$m$$: a numerical study. Zbl 1413.65417
Bonnetier, Eric; Triki, Faouzi; Tsou, Chun Hsiang
2018
Analysis of multi-index Monte Carlo estimators for a Zakai SPDE. Zbl 1413.65010
Reisinger, Christoph; Wang, Zhenru
2018
A fast stochastic Galerkin method for a constrained optimal control problem governed by a random fractional diffusion equation. Zbl 1413.65421
Du, Ning; Shen, Wanfang
2018
A sparse grid stochastic collocation and finite volume element method for constrained optimal control problem governed by random elliptic equations. Zbl 1413.65408
Ge, Liang; Sun, Tongjun
2018
Parallel stochastic Newton method. Zbl 1413.65247
Mutny, Mojmir; Richtarik, Peter
2018
Heterogeneous multiscale method for optimal control problem governed by elliptic equations with highly oscillatory coefficients. Zbl 1424.49024
Ge, Liang; Yan, Ningning; Wang, Lianhai; Liu, Wenbin; Yang, Danping
2018
On effective stochastic Galerkin finite element method for stochastic optimal control governed by integral-differential equations with random coefficients. Zbl 1413.65431
Shen, Wanfang; Ge, Liang
2018
A trust-region-based alternating least-squares algorithm for tensor decompositions. Zbl 1413.90178
Jiang, Fan; Han, Deren; Zhang, Xiaofei
2018
A complete characterization of the robust isolated calmness of nuclear norm regularized convex optimization problems. Zbl 1413.90199
Cui, Ying; Sun, Defeng
2018
High order stable multi-domain hybrid RKDG and WENO-FD methods. Zbl 1424.65175
Zhang, Fan; Liu, Tiegang; Cheng, Jian
2018
Optimal quadratic Nitsche extended finite element method for interface problem of diffusion equation. Zbl 1424.65226
Wang, Fei; Zhang, Shuo
2018
A full discrete stabilized method for the optimal control of the unsteady Navier-Stokes equations. Zbl 1424.49025
Qin, Yanmei; Chen, Gang; Feng, Minfu
2018
Exponential Fourier collocation methods for solving first-order differential equations. Zbl 1413.65309
Wang, Bin; Wu, Xinyuan; Meng, Fanwei; Fang, Yonglei
2017
A second-order convex splitting scheme for a Cahn-Hilliard equation with variable interfacial parameters. Zbl 1413.65326
Li, Xiao; Qiao, Zhonghua; Zhang, Hui
2017
Error estimates of finite element methods for stochastic fractional differential equations. Zbl 1399.65212
Li, Xiaocui; Yang, Xiaoyuan
2017
A decoupling method with different subdomain time steps for the non-stationary Navier-Stokes/Darcy model. Zbl 1399.65254
Jia, Huiyong; Shi, Peilin; Li, Kaitai; Jia, Hongen
2017
On PMHSS iteration methods for continuous Sylvester equations. Zbl 1413.65130
Dong, Yongxin; Gu, Chuanqing
2017
A multigrid semismooth Newton method for semilinear contact problems. Zbl 1413.65447
Ulbrich, Michael; Ulbrich, Stefan; Bratzke, Daniela
2017
Finite element exterior calculus for evolution problems. Zbl 1399.65252
Gillette, Andrew; Holst, Michael; Zhu, Yunrong
2017
ExtraPush for convex smooth decentralized optimization over directed networks. Zbl 1413.90211
Zeng, Jinshan; Yin, Wotao
2017
The alternating direction methods for solving the Sylvester-type matrix equation $$AXB + CX^{\text{T}}D = E^*$$. Zbl 1413.65140
Ke, Yifen; Ma, Changfeng
2017
A linearly-fitted conservative (dissipative) scheme for efficiently solving conservative (dissipative) nonlinear wave PDEs. Zbl 1413.65369
Liu, Kai; Wu, Xinyuan; Shi, Wei
2017
Local structure-preserving algorithms for the KdV equation. Zbl 1413.65459
Wang, Jialing; Wang, Yushun
2017
...and 1038 more Documents
### Cited by 6,627 Authors
95 Shi, Dongyang 60 Huang, Yunqing 59 Guo, Ben-Yu 55 Chen, Yanping 47 Ma, Changfeng 38 Zhang, Zhimin 34 Dehghan Takht Fooladi, Mehdi 33 Shang, Yueqiang 32 Hu, Jun 31 Zhang, Shangyou 29 He, Yinnian 29 Shu, Chi-Wang 28 Carstensen, Carsten 28 Xu, Jinchao 28 Zhang, Chengjian 28 Zhang, Guofeng 26 Brenner, Susanne Cecelia 25 Bai, Zhongzhi 25 Han, Houde 25 Wang, Wansheng 25 Wang, Yushun 24 Chen, Shaochun 24 Gatica, Gabriel N. 24 Wang, Zhongqing 23 Bao, Weizhu 23 Li, Shoufu 23 Petković, Miodrag S. 23 Wang, Xiang 22 Cao, Yang 22 Chen, Long 22 Lin, Qun 22 Qiu, Jianxian 22 Wu, Yujiang 21 Luo, Zhendong 21 Shi, Zhongci 21 Zhang, Jiwei 20 Tang, Yifa 20 Wang, Renhong 20 Xie, Xiaoping 20 Yang, Yidu 20 Zhang, Luming 19 Mao, Shipeng 19 Sayas, Francisco-Javier 19 Wang, Tianjun 19 Yang, Aili 19 Yang, Danping 19 Zhang, Shuo 18 Argyros, Ioannis Konstantinos 18 Du, Qiang 18 Huang, Ting-Zhu 18 Li, Peijun 18 Nelakanti, Gnaneshwar 18 Qin, Mengzhao 18 Yan, Ningning 18 Zheng, Chunxiong 17 Xu, Yan 17 Zhou, Aihui 16 Brunner, Hermann 16 Feng, Minfu 16 Huang, Zhongyi 16 Ju, Lili 16 Li, Yonghai 16 Li, Zi-Cai 16 Pan, Kejia 16 Wang, Yuanming 16 Xie, Hehu 15 Baccouch, Mahboub 15 Bi, Hai 15 Du, Guangzhi 15 Hajarian, Masoud 15 Han, Weimin 15 He, Wenming 15 Hong, Jialin 15 Huang, Xuehai 15 Merdon, Christian 15 Qi, Liqun 15 Shu, Shi 15 Sun, Wenyu 15 Wei, Yimin 15 Wen, Liping 15 Wu, Qingbiao 15 Zhang, Tie 14 Bao, Gang 14 Dai, Hua 14 Huang, Chengming 14 Huang, Qiumei 14 Li, Jichun 14 Linke, Alexander 14 Wang, Qingwen 14 Ye, Xiu 14 Yin, Junfeng 14 Yuan, Jinyun 14 Zhu, Detong 14 Zhu, Qiding 13 Chen, Guoliang 13 Chen, Jinru 13 Gong, Wei 13 Guo, Hui 13 Huang, Jin 13 Huang, Zhengge ...and 6,527 more Authors
### Cited in 425 Journals
482 Journal of Computational and Applied Mathematics 478 Applied Mathematics and Computation 328 Computers & Mathematics with Applications 300 Journal of Scientific Computing 264 Journal of Computational Physics 241 Applied Numerical Mathematics 152 Numerical Algorithms 141 Computer Methods in Applied Mechanics and Engineering 137 Mathematics of Computation 115 Numerische Mathematik 102 Linear Algebra and its Applications 96 Computational and Applied Mathematics 86 SIAM Journal on Scientific Computing 86 Advances in Computational Mathematics 81 Numerical Methods for Partial Differential Equations 79 SIAM Journal on Numerical Analysis 75 Advances in Applied Mathematics and Mechanics 67 Acta Mathematicae Applicatae Sinica. English Series 65 BIT 65 Science China. Mathematics 62 Journal of Mathematical Analysis and Applications 61 International Journal of Computer Mathematics 60 Communications in Computational Physics 59 Journal of Optimization Theory and Applications 55 Applied Mathematics Letters 48 Mathematical Problems in Engineering 48 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 47 Applied Mathematics and Mechanics. (English Edition) 47 Computational Optimization and Applications 45 Calcolo 45 Journal of Applied Mathematics and Computing 42 Journal of Inequalities and Applications 40 East Asian Journal on Applied Mathematics 32 Mathematics and Computers in Simulation 32 Science in China. Series A 32 Abstract and Applied Analysis 32 Computational Methods in Applied Mathematics 30 Applied Mathematical Modelling 29 Journal of Applied Mathematics 29 International Journal of Numerical Analysis and Modeling 28 Applications of Mathematics 27 Numerical Linear Algebra with Applications 27 Advances in Difference Equations 26 Computers and Fluids 26 Engineering Analysis with Boundary Elements 25 Mathematical Methods in the Applied Sciences 23 Frontiers of Mathematics in China 21 Inverse Problems 21 Linear and Multilinear Algebra 21 Communications in Nonlinear Science and Numerical Simulation 19 Computing 19 Computer Aided Geometric Design 19 Japan Journal of Industrial and Applied Mathematics 19 Applied Mathematics. Series B (English Edition) 19 Journal of Systems Science and Complexity 18 Journal of Computational Mathematics 17 Mathematical and Computer Modelling 17 Acta Mathematica Sinica. English Series 17 SIAM Journal on Imaging Sciences 16 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 16 Journal of Global Optimization 16 Mathematical Programming. Series A. Series B 16 Multiscale Modeling & Simulation 15 Applicable Analysis 15 Discrete and Continuous Dynamical Systems. Series B 15 Boundary Value Problems 14 International Journal for Numerical Methods in Engineering 13 SIAM Journal on Optimization 13 ETNA. Electronic Transactions on Numerical Analysis 13 Optimization Methods & Software 13 Discrete Dynamics in Nature and Society 13 Nonlinear Analysis. Real World Applications 13 Communications on Applied Mathematics and Computation 12 Journal of Approximation Theory 12 Journal of Differential Equations 12 Physica D 12 European Journal of Operational Research 12 Filomat 12 Optimization Letters 11 Journal of the Franklin Institute 11 Journal of Integral Equations and Applications 11 M$$^3$$AS. Mathematical Models & Methods in Applied Sciences 11 Journal of Mathematics 11 International Journal of Applied and Computational Mathematics 11 AIMS Mathematics 11 Results in Applied Mathematics 10 Computer Physics Communications 10 Applied and Computational Harmonic Analysis 10 Lobachevskii Journal of Mathematics 10 Journal of the Operations Research Society of China 9 Wave Motion 9 SIAM Journal on Control and Optimization 9 Computational Mechanics 9 SIAM Journal on Matrix Analysis and Applications 9 International Journal of Nonlinear Sciences and Numerical Simulation 9 Comptes Rendus. Mathématique. Académie des Sciences, Paris 9 International Journal of Computational Methods 9 Mediterranean Journal of Mathematics 9 Journal of Industrial and Management Optimization 9 Inverse Problems in Science and Engineering ...and 325 more Journals
### Cited in 57 Fields
4,823 Numerical analysis (65-XX) 1,812 Partial differential equations (35-XX) 838 Fluid mechanics (76-XX) 626 Operations research, mathematical programming (90-XX) 394 Linear and multilinear algebra; matrix theory (15-XX) 379 Calculus of variations and optimal control; optimization (49-XX) 359 Mechanics of deformable solids (74-XX) 302 Ordinary differential equations (34-XX) 271 Integral equations (45-XX) 212 Optics, electromagnetic theory (78-XX) 175 Approximations and expansions (41-XX) 161 Operator theory (47-XX) 131 Dynamical systems and ergodic theory (37-XX) 112 Probability theory and stochastic processes (60-XX) 104 Computer science (68-XX) 96 Statistical mechanics, structure of matter (82-XX) 94 Biology and other natural sciences (92-XX) 90 Information and communication theory, circuits (94-XX) 76 Mechanics of particles and systems (70-XX) 67 Systems theory; control (93-XX) 58 Harmonic analysis on Euclidean spaces (42-XX) 53 Real functions (26-XX) 51 Quantum theory (81-XX) 43 Functions of a complex variable (30-XX) 42 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 39 Classical thermodynamics, heat transfer (80-XX) 36 Special functions (33-XX) 35 Combinatorics (05-XX) 33 Statistics (62-XX) 31 Global analysis, analysis on manifolds (58-XX) 29 Geophysics (86-XX) 26 Potential theory (31-XX) 20 Algebraic geometry (14-XX) 19 Functional analysis (46-XX) 18 Difference and functional equations (39-XX) 16 Field theory and polynomials (12-XX) 15 Differential geometry (53-XX) 11 General and overarching topics; collections (00-XX) 11 Integral transforms, operational calculus (44-XX) 9 Convex and discrete geometry (52-XX) 8 Number theory (11-XX) 5 Astronomy and astrophysics (85-XX) 4 History and biography (01-XX) 4 Commutative algebra (13-XX) 3 Associative rings and algebras (16-XX) 3 Topological groups, Lie groups (22-XX) 3 Sequences, series, summability (40-XX) 3 Geometry (51-XX) 3 Algebraic topology (55-XX) 3 Relativity and 
gravitational theory (83-XX) 2 Mathematical logic and foundations (03-XX) 2 Nonassociative rings and algebras (17-XX) 2 Manifolds and cell complexes (57-XX) 1 Group theory and generalizations (20-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Abstract harmonic analysis (43-XX) 1 General topology (54-XX)
|
{}
|
Question-and-Answer Resource for the Building Energy Modeling Community
Get started with the Help page
# hourly heating and cooling loads for one year
Is there a way to get heating and cooling loads aggregated as a total value for each hour in one year?
edit retag close merge delete
2
Please add the software you are using (from your previous question, I believe you are using openstudio). Also, be judicious in your use of keywords. For instance, simulation is not a good keyword, because it essentially applies to every question on this site. :)
( 2016-05-17 08:36:39 -0600 )edit
Thank you very much for your reply, the information and your help. Yes, I'm using OpenStudio to run the simulation. I could get the thermal loads for one day through simulation in both OpenStudio and EnergyPlus, but I get them in 10-minute periods for one day, and I still need the hourly loads for one year. Here are the simulation files: link text. Best regards.
( 2016-05-17 21:45:08 -0600 )edit
1
Your link is to the generic https://drive.google.com/drive/my-drive so we can't see your attachment.
( 2016-05-18 01:31:09 -0600 )edit
link text link text link text. Can you please check the files now? I could get the loads for one day but can't get the annual one, and I don't know which day that is! Regards
( 2016-05-19 04:58:17 -0600 )edit
Can you make a clear effort to ask clearer questions please? What you're asking is probably very easy, but this is so tedious... We shouldn't have to guess what you're asking and have to answer 4 different questions in the hope that we'll hit what you want.
( 2016-05-23 02:54:37 -0600 )edit
Sort by » oldest newest most voted
Do you want:
• The total cooling and heating load (as in energy) for one year = 2 numbers
• For this, if the variable is listed as "HVAC Sum" - which is generally the case for any variable reported in Joules - you can report the variable as RunPeriod like @Waseem said
• Each individual hourly load value for both cooling and heating (as in demand) = 8760 * 2 numbers.
• For this, report the variable at timestep Hourly.
Second:
• Are you asking about what variable to report? If so, it's been asked several times already, see Best output:variable for zone heating / cooling load for example.
• Or are you asking how you can add the specific variable within OpenStudio when you don't see it in the list? First, both of the variables mentioned in the post I just linked are available right away... Anyways, there's a measure for that: go check the BCL, and by typing "add output" you will see plenty of measures.
Finally, a couple of points:
• Make sure you are actually running the simulation for a whole year: under "Simulation Settings" tab, put RunPeriod from Jan 1 to Dec 31, and make sure you have checked "Run Simulation for Weather File Run Periods"
• The eplusssz.csv and epluszsz.csv are generated for the sizing periods only. Your outputs will be in the SQL file and the .eso file. If you want them generated as CSV, just do a search on Unmet Hours; it's been covered more than once (hint: there's a measure to generate a CSV export from a meter (see the BCL link I gave you...), or you can run the IDF in EP-Launch, or you can learn how to query the SQL file).
• The file that you linked, I don't see any problem with it
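To make the "hourly" point concrete: once sub-hourly values are reported (e.g. at a 10-minute timestep), they can be summed into hourly totals. A minimal Python sketch, assuming a simplified CSV export: the column name, demo data and helper name are made up, and EnergyPlus stamps each value with the end of its reporting interval.

```python
# Sketch: sum sub-hourly EnergyPlus values (in Joules) into hourly totals.
# Assumes timestamps mark the END of each reporting interval, as EnergyPlus
# does, so "01:00:00" closes hour 0 and "24:00:00" closes hour 23.
import csv
from collections import defaultdict
from io import StringIO

def hourly_totals(csv_text, value_col):
    hourly = defaultdict(float)
    for row in csv.DictReader(StringIO(csv_text)):
        date, time = row["Date/Time"].split()
        hh, mm, _ = time.split(":")
        hh, mm = int(hh), int(mm)
        if mm == 0:          # end-of-interval stamp closes the previous hour
            hh -= 1
        hourly[(date, hh)] += float(row[value_col])   # sum Joules over the hour
    return dict(hourly)

# Made-up 10-minute data for demonstration:
demo = """Date/Time,Zone Air System Sensible Heating Energy [J]
01/21 00:10:00,1000
01/21 00:20:00,1000
01/21 00:30:00,1000
01/21 00:40:00,1000
01/21 00:50:00,1000
01/21 01:00:00,1000
01/21 01:10:00,2000
"""
totals = hourly_totals(demo, "Zone Air System Sensible Heating Energy [J]")
print(totals)  # hour 0 sums to 6000 J, hour 1 (so far) to 2000 J
```

Applied to a full annual run, this yields the 8760 × 2 values asked for (one series for heating, one for cooling).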
more
Thank you very much for your answer and the highly appreciated information you are sharing with me. Yes, what I needed was to get each individual hourly load value for both cooling and heating (as in demand) = 8760 * 2 numbers, while I didn't know the best output variable to use for zone cooling and heating loads, and now I know thanks to you.
( 2016-05-24 00:53:52 -0600 )edit
I think what you have to do is to change the output option to runperiod (this will output runperiod energy consumption e.g. annual energy consumption if you are running annual simulation). Please see picture below. From your csv files it looks like you are outputting daily consumption and other parameters at every timestep.
more
Picture is missing :)
( 2016-05-20 12:40:57 -0600 )edit
oops, just added it (though it is not of very high quality but I assume it would be OK)
( 2016-05-20 14:42:26 -0600 )edit
Thanks for your answer, but this is not what I'm looking for. In this list there is no option to get loads as a total number for heating and cooling; it's about very specific loads which I currently don't need. I need a way to get a file just like the one I shared, but hourly and given for one year. Regards
( 2016-05-21 03:15:56 -0600 )edit
But basically what you want is annual energy consumption for heating and cooling.
more
2
( 2016-05-20 12:40:40 -0600 )edit
No, I don't need the annual heating and cooling, I need the hourly one for a one-year period. Check the file I shared above. I need to make one like it that will run for a year, not one day. Regards
( 2016-05-21 03:17:53 -0600 )edit
|
{}
|
# Find the LCM of $\sqrt{343}$.
Last updated date: 24th Mar 2023
Hint: Factorize the number using the prime factorization method. Break down 343 as a product of prime numbers, then take the root of those prime factors to simplify $\sqrt{343}$.
LCM is the least common multiple; it is also referred to as the lowest common multiple.
For two integers a and b, denoted by LCM(a,b), the LCM is the smallest positive integer that is evenly divisible by both a and b. For example, LCM(2,3) = 6 and LCM(6,10) = 30.
We can find it by the prime factorization method.
The method is to break down, or express, a given number as a product of prime numbers, where a prime number is a whole number greater than 1 that is divisible only by 1 and itself.
$\therefore$LCM of $343=7\times 7\times 7$
where 7 is a prime no.
$\therefore \sqrt{343}=\sqrt{7\times 7\times 7}=7\sqrt{7}$
Note: The factorization can also be found by listing the factors, rather than by prime factorization.
So, $343=7\times 7\times 7$
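The same simplification can be checked with a short Python sketch (the helper name `simplify_sqrt` is made up for illustration) that factors out paired prime factors from under the root:

```python
# Sketch: simplify sqrt(n) by prime factorization, as done above for 343.
def simplify_sqrt(n):
    """Return (outside, inside) such that sqrt(n) = outside * sqrt(inside)."""
    outside, inside = 1, 1
    p = 2
    while p * p <= n:
        exp = 0
        while n % p == 0:
            n //= p
            exp += 1
        outside *= p ** (exp // 2)   # paired factors leave the root
        if exp % 2:
            inside *= p              # an unpaired factor stays under the root
        p += 1
    inside *= n                      # leftover prime factor, if any
    return outside, inside

print(simplify_sqrt(343))  # 343 = 7*7*7, so sqrt(343) = 7*sqrt(7) -> (7, 7)
```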
|
{}
|
### Client/Server muzzleVector/Point mismatch.
Expanding and utilizing the engine via C++.
• 1
• 2
#### Client/Server muzzleVector/Point mismatch.
LoLJester
Posts: 65
Joined: Thu Aug 13, 2015 5:58 pm
The values returned from the muzzle of a gun are different between Server and Client - see picture:
Anybody know why that is?
#### Re: Client/Server muzzleVector/Point mismatch.
Duion
Posts: 1009
Joined: Sun Feb 08, 2015 1:51 am
What exactly are we seeing here and how are you testing the muzzle vector?
Needs more information on this.
#### Re: Client/Server muzzleVector/Point mismatch.
LoLJester
Posts: 65
Joined: Thu Aug 13, 2015 5:58 pm
What you're seeing is a blue line, drawn from the muzzlePoint (getRenderMuzzlePoint ) along a Vector (getRenderMuzzleVector) called on the Client, and a red line - same as the blue line - but called on the Server:
Code: Select all
//---------------------------------------------------------------
void LaserBeam::ProcessTick(const Move *move)
{
   ...
   _drawDebug();
   ...
}
//--------------------------------------------
void LaserBeam::renderObject(...)
{
   _drawDebug();
}
//-----------------------------------------------------------------------------------
void LaserBeam::_drawDebug()
{
   ...
   VectorF _muzzleVec;
   Point3F _muzzlePoint, _endPosition;

   mSourceObject->getRenderMuzzleVector(mSourceObjectSlot, &_muzzleVec);
   mSourceObject->getRenderMuzzlePoint(mSourceObjectSlot, &_muzzlePoint);

   _endPosition = _muzzleVec * 50;
   _endPosition += _muzzlePoint;

   ColorF clientCol(0, 0, 1);      // On Client: Draw line in Blue.
   if (isServerObject())
      clientCol.set(1, 0, 0);      // On Server: Draw line in Red.

   DebugDrawer::get()->drawLine(_muzzlePoint, _endPosition, clientCol);
   DebugDrawer::get()->setLastTTL(TickMs);
   ...
}
Last edited by LoLJester on Tue Mar 06, 2018 4:55 pm, edited 1 time in total.
#### Re: Client/Server muzzleVector/Point mismatch.
Duion
Posts: 1009
Joined: Sun Feb 08, 2015 1:51 am
Well there are different kinds of muzzle points and vectors, for example there are
Code: Select all
correctMuzzleVector = true;
correctMuzzleVectorTP = true;
In the weapon datablock, so you have a first person and a third person muzzle vector, additionally they can be corrected, depending if you want to use the true muzzle vector of the weapon or not and your debug function may use a different kind of vector.
#### Re: Client/Server muzzleVector/Point mismatch.
LoLJester
Posts: 65
Joined: Thu Aug 13, 2015 5:58 pm
Thanks Duion, but the correctMuzzleVector(s) for first and third person work fine. The problem originates somewhere deeper than that. There seems to be a muzzleNode position packUpdate/unpackUpdate discrepancy somewhere in the Player class or one of its parent classes. I first thought that the nodes don't get animated on the server, but they do. The image I posted shows exactly what I mean. On the Server (the red line), the muzzleNode position is lower than on the Client (the blue line) by around 0.1 for some reason. Any other ideas why?
#### Re: Client/Server muzzleVector/Point mismatch.
Duion
Posts: 1009
Joined: Sun Feb 08, 2015 1:51 am
How is that relevant now? I mean bullets can only use one or the other, which vector is used by the bullets that are fired? I would assume it uses the blue line for bullets.
#### Re: Client/Server muzzleVector/Point mismatch.
LoLJester
Posts: 65
Joined: Thu Aug 13, 2015 5:58 pm
They are the same vector. Imagine the lines are like raycasts, but only the red line (Server) can detect and call the onCollision function in script: see how the red line is bent downwards and does not collide where the blue line does?
Has anybody else had this problem and may have a fix?
#### Re: Client/Server muzzleVector/Point mismatch.
LoLJester
Posts: 65
Joined: Thu Aug 13, 2015 5:58 pm
Despite all the views... nobody else?
#### Re: Client/Server muzzleVector/Point mismatch.
Jason Campbell
Posts: 269
Joined: Fri Feb 13, 2015 2:51 am
Sorry Jester, I don't have any way of testing anything for you. Could it be the implementation of the weapons pack?
#### Re: Client/Server muzzleVector/Point mismatch.
LoLJester
Posts: 65
Joined: Thu Aug 13, 2015 5:58 pm
How do you mean? The weapon pack seems to work fine. The muzzle node is placed properly as you can see with the blue line.
• 1
• 2
#### Who is online
Users browsing this forum: No registered users and 1 guest
|
{}
|
Question
# If $$\mathrm{f}(x)=\sqrt{1-\sqrt{1-x^{2}}}$$, then $$\mathrm{f}(x)$$ is
A
Continuous on $$[-1,1]$$ and differentiable on $$(-1,1)$$
B
Continuous on $$[-1,1]$$ and differentiable on $$(-1,0)\cup(0,1)$$
C
Continuous and differentiable on $$[-1,1]$$
D
Continuous and differentiable on $$(-1,1)$$
Solution
## The correct option is B: continuous on $$[-1,1]$$ and differentiable on $$(-1,0)\cup(0,1)$$. We have $$f(x)=\sqrt{1-\sqrt{1-x^{2}}}$$ for $$x\in[-1,1]$$. Let $$x=\sin t$$ for $$t\in\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$$. Then $$f=\sqrt{1-\cos t}=\sqrt{2}\,\left|\sin\frac{t}{2}\right|.$$ This function is continuous on $$[-1,1]$$, but near $$x=0$$ we have $$\sqrt{1-x^{2}}\approx 1-\frac{x^{2}}{2}$$, so $$f(x)\approx\frac{|x|}{\sqrt{2}}$$, which has a corner at $$x=0$$ and is therefore not differentiable there. Hence $$f$$ is differentiable on $$(-1,0)\cup(0,1)$$, and the answer is "B".
|
{}
|
# Does the ordered pair (-3,2) satisfy both x+y=-1 and 2x+5y=4?
Sep 16, 2016
Yes
#### Explanation:
We have: $x + y = - 1$ and $2 x + 5 y = 4$ for $\left(- 3 , 2\right)$
We can check whether this ordered pair satisfies the equations by substituting them into the respective equations:
$\implies \left(- 3\right) + \left(2\right) = - 1$
$\implies - 3 + 2 = - 1$
$\implies - 1 = - 1$
and
$\implies 2 \left(- 3\right) + 5 \left(2\right) = 4$
$\implies - 6 + 10 = 4$
$\implies 4 = 4$
Therefore, the ordered pair satisfies the equations.
|
{}
|
You have accessReview articles
Modelling of framework materials at multiple scales: current practices and open questions
Abstract
The last decade has seen an explosion of the family of framework materials and their study, from both the experimental and computational points of view. We propose here a short highlight of the current state of methodologies for modelling framework materials at multiple scales, putting together a brief review of new methods and recent endeavours in this area, as well as outlining some of the open challenges in this field. We will detail advances in atomistic simulation methods, the development of material databases and the growing use of machine learning for the prediction of properties.
1. Introduction
Nanoporous materials with high specific surface area are extensively used in a wide range of applications, including catalysis, ion exchange, gas storage, gas or liquid separations, sensing and detection, electronics and drug delivery. The last 15 years have seen the emergence of entire new classes of crystalline nanoporous materials, based on weaker bonds (coordination bonds, π–π stacking, hydrogen bonds, …). The most studied of these new materials are the metal-organic frameworks (MOFs): these nanoporous hybrid organic–inorganic materials, built from metal centres interconnected by organic linkers, have been the subject of an intensive research effort since the pioneering work done by R. Robson in the 1990s, with thousands of structures synthesized. Other classes of crystalline nanoporous materials that have emerged in the past decade include covalent organic frameworks, porous molecular organic solids and other porous molecular framework materials.
Among these nanoporous materials, an interesting family of materials has recently started to emerge, named ‘stimuli-responsive materials’ or ‘soft porous crystals’ [1], which exhibit large or anomalous responses to external physical or chemical stimulation [2]. These modifications of framework structure and pore dimensions also involve, in turn, a modification of other physical and chemical properties, making such materials multifunctional (or ‘smart materials’). Stimuli-reactive crystals include a wide diversity of eye-catching phenomena such as negative adsorption [3], negative linear compressibility or negative area compressibility [4], pressure-induced bond rearrangement and framework topology changes [5], photoresponsive frameworks [6] and intrusion-induced polymorphism [7], to name a few. Each of these properties can be leveraged for applications in several fields, for example to make sensors and actuators, to store mechanical energy, to engineer composite materials with targeted mechanical and thermal properties, etc.
Soft framework materials, because they are built from weaker interactions, have large-scale complex supramolecular architectures, and can exhibit many dynamic phenomena such as those just described, are a particular challenge in terms of computational modelling. Compared to ‘traditional’ dense materials, such as oxides, they can require additional computational power (due to the increased time and length scales involved), or even novel simulation methodologies. In this paper, we propose a brief review of new methods and recent endeavours in this area, of the perspectives opened, as well as outline some of the open challenges in this field. We will first detail recent advances in atomistic simulation methods for framework materials, going beyond structural properties of perfect crystals to address their behaviour under stimulation and in a large range of working conditions, as well as the emergence of defects and disordered phases. We will then highlight the recent development of material databases, and within this the specific place of framework materials. Finally, our last section will focus on the growing use of machine learning techniques for the prediction of complex material structures and properties.
2. Computational methods for framework materials
(a) Classical and ab initio simulations
If one wants to understand the properties and behaviour of a crystalline material using computational methods, the usual starting point is to compute ‘static’ properties of the perfect infinite crystal, using quantum chemistry methods, such as Kohn–Sham density functional theory. Starting from an energy-minimized (relaxed) structure, researchers can then compute zero Kelvin properties, at or around that energy minimum: structural and electronic properties, such as the band gap and the band structure; vibrations of the atoms around their equilibrium position, computed as phonons; and infinitesimal deformations of the system can yield elastic properties. For ‘traditional’ materials, such as oxides, metallic alloys or other dense inorganic materials, most of the behaviour and properties of a system can be computed using such methodology. In stark contrast, for complex framework materials with highly dynamic behaviour, this might not be enough and one has to resort to more complex and more demanding simulation methods. Specifically, for soft porous materials, their dynamic properties and response to various external stimuli play a crucial part in their properties and possible applications. In this case, exploring the behaviour of the system in the vicinity of its energy-minimal structure is not sufficient, and molecular dynamics (MD) simulations can be necessary to adequately describe the behaviour of the material—as well as providing important insights into the atomistic processes governing the macroscopic behaviour.
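Whatever the level of description of the forces, MD itself rests on a simple time-stepping loop. A minimal velocity-Verlet sketch for a single harmonic degree of freedom (reduced units, no thermostat; purely illustrative, not taken from any framework force-field):

```python
# Minimal velocity-Verlet MD sketch for one particle in a harmonic well,
# U(x) = 0.5 * k * x^2. Reduced units: mass m = 1, force constant k = 1.
def force(x, k=1.0):
    return -k * x

def velocity_verlet(x, v, dt, nsteps, m=1.0):
    f = force(x)
    traj = []
    for _ in range(nsteps):
        v += 0.5 * dt * f / m          # half kick
        x += dt * v                    # drift
        f = force(x)                   # recompute force at new position
        v += 0.5 * dt * f / m          # half kick
        traj.append(x)
    return x, v, traj

x, v, traj = velocity_verlet(x=1.0, v=0.0, dt=0.01, nsteps=1000)
```

The scheme is time-reversible and conserves energy to high accuracy over long trajectories, which is why it (and close relatives) underpins essentially all MD codes.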
The so-called classical MD simulations, relying on parameterized force-fields to represent intra- and intermolecular interactions, have the advantage of being usable for big simulations, either in the duration of the simulated events or the size of the system. This means that we can study rare events such as crystal nucleation or reactions—as well as systems where a large simulation box is needed, for example the effects of disorder and defects (a topic which we will discuss more below). The issue here is that there are very few reliable, well-tested and transferable force-fields for use with framework materials. One has to choose between: (i) force fields derived for a single material, which describe the potential energy surface of the system with high accuracy, but are not transferable to other materials; (ii) generic force-fields, whose analytical expressions and parameters are transferable among a large class of materials, but that poorly reproduce physical properties. The second approach has been widely used, by relying on generic force-fields such as AMBER [8] or UFF [9]—possibly with adjustments or extensions—to get a consistent treatment of all frameworks, and therefore to compare different materials when searching for the best candidate for a given application in high-throughput studies [10–12]. One problem arising from this approach is that these force-fields might not contain adequate terms to describe the delicate balance of intra- and intermolecular interactions in framework materials. In particular, one can think of the metal coordination bonds, π–π stacking and other soft intermolecular interactions. On the other hand, deriving new force-fields for specific systems, while useful to investigate the behaviour of a given material thanks to the higher accuracy of the potential energy surface, fails to allow for comparisons with other systems and is not suitable for large-scale screening.
Another choice of methodology is to use an ab initio description of the interactions in the system, where a quantum chemistry method is used at every time step of the MD simulation—this approach is also called first-principles molecular dynamics (FPMD). This has a much higher computational cost, and thus limits the length and time scales that can be reached, but does not make any assumption on the nature of the interactions. This was used by Chaplais et al. [13] to describe how the adsorbed phase arranges inside a fully flexible ZIF-8, without needing to create a classical force-field that would be able to reproduce the full flexibility of ZIF-8. Furthermore, FPMD allows the description of bond breaking and formation, which can be crucial in some dynamic phenomena: as an example, Howe et al. [14] used it to analyse the stability of MOFs in the presence of water.
We note that the question of the ‘level of description’ applied to the systems (quantum chemistry versus empirical potentials) is relevant not only for MD but also for Monte Carlo simulations, which stochastically generate representative configurations of the system in a given thermodynamic ensemble, by the application of random moves weighted by the appropriate Boltzmann probabilities. However, while ab initio Monte Carlo simulations are possible, the large number of energy evaluations necessary makes them relatively rare in the literature [15,16]. In the context of framework materials, Monte Carlo simulations are used at various scales. First, simulated annealing and biased Monte Carlo simulations are extensively used in the areas of structure solution and for localization of extra-framework ions and adsorbed species [17,18]. Secondly, Grand Canonical Monte Carlo is very often used to describe the thermodynamics of adsorption of fluids and fluid mixtures in nanoporous frameworks [19]. Finally, mesoscale Monte Carlo modelling methods can be used to assess the large-scale ordering (or disorder) in supramolecular frameworks, based on carefully constructed Hamiltonians that describe the local interactions [20–22].
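All of these Monte Carlo flavours share the same Metropolis acceptance rule; a toy sketch on a one-dimensional harmonic "energy landscape" illustrates it (all parameters are arbitrary; Grand Canonical simulations additionally include insertion/deletion moves weighted by the chemical potential):

```python
# Toy Metropolis Monte Carlo: trial moves on a single coordinate with
# energy U(x) = 0.5 * x^2, accepted with probability min(1, exp(-beta*dE)).
import math
import random

def metropolis(nsteps, beta=1.0, step=0.5, seed=42):
    rng = random.Random(seed)
    x, energy = 0.0, 0.0
    samples = []
    for _ in range(nsteps):
        x_trial = x + rng.uniform(-step, step)
        e_trial = 0.5 * x_trial ** 2
        # Metropolis acceptance rule (Boltzmann-weighted):
        if e_trial <= energy or rng.random() < math.exp(-beta * (e_trial - energy)):
            x, energy = x_trial, e_trial
        samples.append(x)
    return samples

samples = metropolis(20000)
```

At equilibrium the samples follow the Boltzmann distribution of the model, here a Gaussian of unit variance for beta = 1.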
(b) Make force-fields great again
Despite the rather strong limitations of force-fields described above for their application to framework materials, there have been several recent developments in that area, which we want to highlight here. Deriving a new force-field for a material is a hard and long task, where one needs not only to gather or generate reference data, but also to adapt parameters and check every time that the physical properties predicted by the force-field are right. In the past few years, novel methodologies for force-field fitting have been proposed, relying on machine learning algorithms. They aim to make the process more automatic, more reproducible, and also reduce its reliance on human input. Starting from the structure optimized with ab initio calculations and the Hessian at this energy minimum, a machine-learning procedure (for example, a genetic algorithm coupled with a least-squares minimization) finds the optimal set of parameters matching the structure and the Hessian. Some implementations of this idea are the MOF-FF [23] and QuickFF [24] force-fields—or maybe more accurately, force-field optimization methodologies. While they use slightly different input data and fitting procedures, they share the common goal of parameterizing force-fields in a systematic and consistent fashion, from first-principles reference data.
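The parameter-fitting step in such schemes can be illustrated, much simplified, by recovering a single harmonic force constant from reference energies via linear least squares. All data below are synthetic stand-ins for ab initio references; real MOF-FF/QuickFF fits involve many coupled terms and the full Hessian.

```python
# Toy version of force-field parameterization: fit the force constant k of
# U(r) = 0.5 * k * (r - r0)^2 to reference energies by linear least squares.
r0 = 1.0
ref_r = [0.90, 0.95, 1.00, 1.05, 1.10]
ref_E = [0.5 * 450.0 * (r - r0) ** 2 for r in ref_r]   # synthetic: "true" k = 450

# The model is linear in k: E_i = k * x_i with x_i = 0.5 * (r_i - r0)^2,
# so the least-squares solution is k = sum(x_i * E_i) / sum(x_i^2).
xs = [0.5 * (r - r0) ** 2 for r in ref_r]
k_fit = sum(x * e for x, e in zip(xs, ref_E)) / sum(x * x for x in xs)
print(k_fit)  # recovers ~450.0
```

In realistic fits the parameters enter non-linearly, hence the use of genetic algorithms or iterative least-squares rather than a closed-form solution.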
To give an example of the use of these new force-field methodologies, MOF-FF was recently used to predict the most stable structure and topology for copper paddle-wheel MOFs depending on the linker [25]. The authors generated all the structures by combining simple building blocks (linkers and copper paddle-wheels) with different topologies, and were then able to use the same force-field to optimize and study them all. Finally, we note that these methods were originally developed by relying on reference data gathered on (finite) clusters representative of the MOF structures, and were later extended to periodic input data. The use of periodic structures as a reference was shown to be essential for a correct description of structural, vibrational and thermodynamic properties of soft framework materials like MIL-53(Al) by QuickFF [26].
Despite this progress, classical force-fields remain fundamentally limited by the analytical form they choose to represent interactions, even when parameterized in an optimal fashion. For example, a force-field using a Lennard–Jones dispersion potential will be unable to reproduce any long-range interaction that does not follow this functional form. A promising alternative, in order to be able to reproduce any possible interaction profile coming from the reference data (i.e. quantum chemistry calculations), is the use of neural-network force-fields. Neural networks are algorithms that map a set of input values to a set of output values by associating adjustable weights with each value, and then using a nonlinear function (called the activation function) to map the weighted inputs to the outputs. If the outputs are then fed to another neural network, the resulting network is said to have multiple layers—see figure 1 for a graphical representation with three layers. One property of neural networks is their ability to reproduce arbitrary multidimensional $\mathbb{R}^n \to \mathbb{R}^m$ functions with arbitrary accuracy [27]. This makes them very appealing to reproduce energy or forces from ab initio calculations, using only the atomic positions as input—effectively functioning like a force-field, without any assumptions on the nature of the interactions. Before being usable, the network must be ‘trained’ with data representative of the system of interest. During this training, the weights are adapted to ensure a correct mapping from the input (the atomic positions) to the output (forces and energy). Using atomic positions in Cartesian coordinates as the input is not optimal, as the generated network will only be usable with the exact same system used for training. An alternative is to rely solely on the local environment of an atom up to a cutoff distance, represented in a translation- and rotation-independent manner [28].
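A minimal sketch of the layered mapping just described, with two hidden layers and a scalar "energy" output. The descriptor values are toys, and the weights are drawn on the fly purely to illustrate the data flow; a real neural-network potential stores fixed weights obtained by training on ab initio data.

```python
# Sketch of a feed-forward network mapping environment descriptors of one
# atom to a scalar atomic energy, as in Behler-Parrinello-style potentials.
import math
import random

rng = random.Random(0)

def layer(inputs, n_out):
    # One fully connected layer: weighted sum + bias, tanh activation.
    # Weights are random here, regenerated at each call (illustration only).
    return [math.tanh(sum(rng.uniform(-1, 1) * x for x in inputs)
                      + rng.uniform(-1, 1))
            for _ in range(n_out)]

def atomic_energy(descriptors):
    h1 = layer(descriptors, 10)    # hidden layer 1
    h2 = layer(h1, 10)             # hidden layer 2
    # Linear output node: the atomic "energy" contribution.
    return sum(rng.uniform(-1, 1) * x for x in h2)

E = atomic_energy([0.2, 0.5, 0.1, 0.9])   # 4 toy descriptor values
```

The total energy of a configuration is then the sum of such atomic contributions, which is what makes the approach scale to large systems.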
Neural network force-fields are a very promising approach to cheap simulation with high accuracy, and they are already used for small organic molecules [29], water [30,31], as well as classical dense crystalline materials [32] and amorphous inorganic materials [33,34]. They are especially helpful with amorphous materials such as silicon or glasses, where the usual classical potentials are complex multi-body potentials. This approach still remains to be extended to porous framework materials.
(c) Simulating complex systems
One of the biggest current challenges in the simulation of framework materials lies in the complexity of these systems. The computational cost of our tools imposes limits on the systems we can model, in terms of length scale (and thus number of atoms) and time scales. For crystalline phases, the use of periodic boundary conditions, where the simulated system is repeated in all spatial dimensions, is a very effective way to describe infinite systems within computers with limited memory and CPUs. However, this approach falls short when we want to study phenomena involving large correlation lengths, such as dynamic properties of soft materials. Another difficult area is the computational modelling of disordered phases, where a very large simulation box would be necessary to correctly describe the system. Yet, within the field of framework materials, such disordered systems are attracting a lot of interest due to their properties that differ from their crystalline counterparts. We can here cite as examples systems such as MOF glasses [35–37] and liquids [38], or framework materials with defects and correlated disorder [39]. There is thus an important drive to model these materials, because of their properties (e.g. amorphous phases can have more appealing mechanical and optical properties than crystals) or because catalysis, nucleation or adsorption can occur preferentially around defects.
A strategy that can be used in this case—if the brute force approach of using a very large simulation box is not feasible—is to use multiple realizations (or ‘copies’) of the system of interest, and average the measured properties over those replicas. This approach has been extensively used in the past for the study of amorphous systems such as silica glasses or disordered carbons. For example, Van Ginhoven et al. [40] used DFT calculations on 18 different configurations of silica glass created using a classical force-field, and were thus able to obtain a good statistical representation of static and dynamic properties with comparable or better accuracy than a longer, bigger simulation (figure 2).
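The replica-averaging strategy above can be sketched generically: generate independent realizations of the disordered system, measure the observable on each, and report the mean together with its standard error over replicas. The observable and its parameters below are a toy stand-in, not data from the cited study.

```python
import math
import random
import statistics

def replica_average(measure, n_replicas, seed=42):
    """Average an observable over independent realizations of a disordered system.

    `measure` is a (hypothetical) function that generates one configuration from
    the given random source and returns the observable measured on it.
    """
    rng = random.Random(seed)
    samples = [measure(rng) for _ in range(n_replicas)]
    mean = statistics.fmean(samples)
    # Standard error of the mean, estimating the replica-to-replica spread
    sem = statistics.stdev(samples) / math.sqrt(len(samples))
    return mean, sem

# Toy observable: a 'density' fluctuating around 2.2 from one replica to the next
mean, sem = replica_average(lambda rng: rng.gauss(2.2, 0.05), n_replicas=18)
```

The standard error indicates when enough replicas have been accumulated: it shrinks as the square root of the number of independent configurations.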
Another strategy to study large-scale systems is to change the level of description, moving closer to mesoscopic methods and using coarse-grained force-fields instead of atomistic ones. Dürholt et al. [41] have generated such a coarse-grained force-field for the HKUST-1 MOF, based on copper paddle-wheels. These authors showed that even a very coarse model is able to reproduce the low-energy deformations of the system, with only one coarse-grained bead for 30 atoms. Another mesoscopic approach, in the field of adsorption, is the use of Lattice Boltzmann methods to describe the coupling between fluid flow and adsorption in porous media with complex geometries [42].
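The basic operation behind such coarse-grained models is mapping groups of atoms onto single beads, typically placed at the group's centre of mass. A minimal sketch (the grouping itself, e.g. the ~30-atom paddle-wheel beads of the HKUST-1 model, would be supplied by the modeller; the example data are hypothetical):

```python
def coarse_grain(positions, masses, groups):
    """Replace each group of atoms by one bead at the group's centre of mass.

    positions: list of (x, y, z) tuples; masses: list of atomic masses;
    groups: list of index lists, each defining the atoms merged into one bead.
    """
    beads = []
    for group in groups:
        total_mass = sum(masses[i] for i in group)
        com = tuple(
            sum(masses[i] * positions[i][k] for i in group) / total_mass
            for k in range(3)
        )
        beads.append({"position": com, "mass": total_mass})
    return beads

# Two diatomic 'molecules', each mapped onto a single bead
atoms = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 4.0, 0.0), (0.0, 6.0, 0.0)]
masses = [1.0, 1.0, 1.0, 3.0]
beads = coarse_grain(atoms, masses, groups=[[0, 1], [2, 3]])
```

A full coarse-grained force-field then defines effective interactions between these beads, fitted so that the reduced model reproduces the low-energy deformations of the atomistic one.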
Beyond this scale, it can also be useful to turn to macroscopic modelling methods to simulate even larger systems. Indeed, many potential applications of framework materials are based on their use not as single crystals, but as nanostructured or composite systems: common examples include monoliths, supported thin films and mixed-matrix membranes. In order to describe these composite systems, one has to turn to conventional macroscopic modelling methods: finite elements for solid mechanics, computational fluid dynamics to describe transport, etc. In this vein, Evans et al. [43] used a macroscopic description and finite-element methods to compute deformation properties of mixed-matrix membranes and other composites of framework materials and polymers. The use of finite-element methods allowed them to study sizes up to 400 μm, which is five orders of magnitude bigger than typical atomistic simulations.
Finally, we note that while we are starting to see new techniques and methods that go from one level of description to the next (quantum to classical, micro to meso, meso to macro), the bridging of those various scales of simulation into a coherent multi-scale simulation methodology is still a widely open research question. How can one use data from ab initio simulations to fit atomistic classical force-fields? [23,44] Or leverage force-field-based data to create a coarse-grained model? [41] Or transfer microscopic properties into input for a finite-element method? [43,45] Every time we go up a level of description, we are able to work with bigger systems at longer timescales, at the cost of some accuracy and precision, but we still lack a systematic way to create and validate these novel models for performance and accuracy.
(d) Describing excited states
We note here that a particularly challenging area of the modelling of framework materials is the description of their excited states, in order to better understand, e.g. their optical properties and photocatalytic activity. Such phenomena involve transitions between the system's ground state and another state of higher energy (the excited state) upon photon absorption or emission. The energy difference involved in the electronic transitions is directly related to the position of absorption and emission bands. Theoretical models can give insight into the properties of electronically excited states, and are therefore a useful complement to experimental measurements. In that framework, density functional theory (DFT), and more precisely its time-dependent form (TD-DFT) [46,47], is the ab initio method of choice in most cases [48,49], as it can treat structures containing up to ca 300 atoms. To study framework materials, Wilbraham et al. have developed a computational protocol to simulate the optical signatures of two MOF structures based on the 4,4-bis((3,5-dimethyl-1H-pyrazol-4-yl)methyl)-biphenyl (H2DMPMB) linker. The protocol was successfully applied to characterize and rationalize the absorption and emission behaviour upon the interchange of zinc and cadmium as the metal cation [48]. Another important optical property in hybrid materials is the nature of the electronic excitations, which can present ligand-to-metal charge transfer (LMCT) character. Very recently, Wu et al. showed that, among different cations, electronic excitations occur in the linker of the UiO-66(Ce) MOF upon light absorption. These authors showed that incorporation of the cerium cation presents an effective way not only to stabilize the LMCT, but also to increase the photocatalytic activity of the UiO-66 MOF [49]. For applications in photocatalysis, the magnitude of the band gap and the absolute positions of the band edges are of high importance [50,51].
As an example, based on the mixing of organic linkers, Ricardo et al. have designed new ZIF materials with a narrower band gap, in order to allow absorption in the visible range of the solar spectrum. They showed that by introducing a transition metal (copper) in the tetrahedral position of the mixed-linker ZIFs, it is possible to increase photo-absorption [51].
3. Material databases
As stated in the introduction, the last decade has seen an important increase in the number of studies on various families of framework materials, with the goal of discovering or designing novel materials with targeted properties. Given the large number of materials synthesized, characterized and reported, three important series of questions arise:
• (i) Where and how is the information on these materials stored? What are the available data?
• (ii) Under what form is it stored, how can it be queried, retrieved and interpreted? That is, issues of Application Programming Interface (API), format and interoperability.
• (iii) What is the extent of information and properties provided for each structure? How were they determined? Those are questions about the metadata associated with each structure.
In this section, we will briefly review the current state of the art and describe some of the existing material databases for framework materials, contrasting the situation with that of inorganic materials.
(a) Zeolites
Let us start with the grandparent of this family of databases, namely the database of zeolite structures from the International Zeolite Association (IZA), which is freely available on the Internet at http://www.iza-structure.org/databases/. Most of the information is also available in the printed form, as the Atlas of zeolite framework types book [52]. Zeolites belong to the class of nanoporous materials and are composed of oxygen, silicon and aluminium. They have widespread applications at the industrial level in the fields of catalysis, adsorption and separation [53–56]. At the current date, the corresponding database provides structural information for 230 zeolite framework types reported experimentally, among which 67 are natural zeolites. Ten years ago, only 176 zeolite frameworks were known, showing that even among ‘conventional’ porous materials, progress is steady and the synthesis of new zeolites remains a considerable challenge. The IZA database is heavily curated: all the zeolitic structures it includes have been approved by the Structure Commission of the IZA, which verifies that each framework type is unique and that its structure has been satisfactorily proved.
The nomenclature for these materials is recognized by IUPAC (the International Union of Pure and Applied Chemistry) and is assigned by a three letter code—such as FAU, for the faujasite framework, or MOR for the mordenite framework. Data associated with each framework type code include crystallographic data: space group, cell parameters, positions of vertices in the idealized framework, but also topological density, ring size, channel dimensions, maximum diameter of an included sphere, accessible volume and composite-building units. Moreover, going beyond idealized framework structure and topological properties, the database features detailed information for building models, and simulated powder diffraction patterns for representative materials, as well as all corresponding literature references.
At this stage, the reader unfamiliar with zeolites may be surprised that only 230 zeolitic structures have been identified experimentally. Indeed, at the molecular scale, zeolites are constituted of TO4 tetrahedra (where typically T = Al or Si), connected by their corners. It is mathematically possible to create an infinite number of such four-connected nets that have three-dimensional periodicity. The question of why only a few structures are experimentally realized, known as ‘zeolite feasibility’, is still wide open [57,58]. Nevertheless, researchers have used theoretical and computational tools to develop databases of hypothetical zeolitic structures—based on four-connected nets, but usually with added constraints such as an upper bound on the lattice energy or topology. Compared to the experimental zeolites, the number of hypothetical zeolitic structures is much larger and rapidly growing. In the first such database published, by Li et al. [59–61] and available at http://mezeopor.jlu.edu.cn/hypo/, two sets of hypothetical zeolite structures are provided [61,62]. The first set is generated by the FraGen algorithm, which is based on Monte Carlo direct space structure modelling [63]. The second set is composed of so-called ABC-6 structures, which are enumerated through a material genome approach [64]. The number of all the ABC-6 structures is 84 292. Besides their structures in CIF format, all hypothetical structures are assembled in an Excel spreadsheet listing their properties, such as stacking layers, stacking sequences, space groups, cell dimensions, channel openings, framework energies, framework densities, stacking compactness and the constituent cages [61].
A second hypothetical zeolite structure database, available at , was generated by Treacy and Foster [65,66]. It contains 5 million different frameworks, triaged into ‘bronze’ and ‘silver’ sets, depending on their feasibility based respectively on a specifically designed cost function and force-field energy minimization. The two sets contain 5 389 408 bronze and 1 270 921 silver structures, and have been used as starting points for a series of theoretical surveys of zeolitic frameworks [67] and related four-connected frameworks [68,69]. Using the Monte Carlo approach, Earl et al. have developed a systematic computational procedure to search through unit cells with different space group symmetries [70], called the symmetry-constrained intersite bonding search (SCIBS) approach. They have used it to generate a third database of 2.6 million zeolite-like materials that have topological, geometrical and diffraction characteristics that are similar to those of known zeolites [71]. All three hypothetical zeolite databases are maintained by individual research groups, and are not open to external submissions of new structures.
Besides the aluminosilicate zeolites, open-framework aluminophosphates, or AlPOs, constitute an important class of microporous inorganic materials with a variety of structures ranging from neutral zeolites to anionic frameworks. The AlPO framework is not only limited to Al and Si as tetrahedral atoms: the upper limit of pore size can go beyond 12-membered rings, and the primary building units are not restricted to tetrahedra. This gives the AlPO family a rich variety of structural architectures and physico-chemical properties. There is an AlPO database, available online at http://mezeopor.jlu.edu.cn/alpo/, developed by Y. Li, J. Yu and R. Xu. It contains over 200 experimental AlPO structures reported in the literature [72]. In addition to general information, such as formula, space group, cell parameters and atomic coordinates, this database also includes more detailed structural information, such as coordination environment, Al/P ratio, stacking sequences for two-dimensional structures and coordination sequences. Simulated XRD reflections and references are also included to help users identify their samples.
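The coordination sequences stored in these databases count, for a reference atom, how many atoms sit at each successive bond-topological distance. On a finite bonding graph they are obtained by a simple breadth-first search, as in the sketch below (real databases compute them on the infinite periodic net, which requires unfolding the unit cell; the example graph is hypothetical):

```python
def coordination_sequence(bonds, start, shells):
    """Count vertices in successive topological shells around `start`.

    bonds: adjacency dict {vertex: [neighbours]}; returns a list whose k-th
    entry is the number of vertices at graph distance k+1 from the start.
    """
    visited = {start}
    frontier = {start}
    sequence = []
    for _ in range(shells):
        next_frontier = set()
        for v in frontier:
            for w in bonds[v]:
                if w not in visited:
                    visited.add(w)
                    next_frontier.add(w)
        sequence.append(len(next_frontier))
        frontier = next_frontier
    return sequence

# Cube graph: eight vertices, each bonded to three neighbours
cube = {
    0: [1, 2, 4], 1: [0, 3, 5], 2: [0, 3, 6], 3: [1, 2, 7],
    4: [0, 5, 6], 5: [1, 4, 7], 6: [2, 4, 7], 7: [3, 5, 6],
}
seq = coordination_sequence(cube, start=0, shells=3)
```

Because the sequence depends only on the bonding topology, it is a useful fingerprint for comparing frameworks independently of their exact geometry.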
(b) Metal-organic frameworks
MOFs appeared almost 30 years ago, and designate a class of materials composed of inorganic nodes linked by organic ligands. These are a novel generation of materials with promising applications, poised to follow zeolites in catalysis and adsorption-related applications. Since their discovery, the growth in the number of MOF structures reported in the Cambridge Structural Database (CSD) has been staggering, as shown in figure 3. The latter contains more than 900 000 small molecule crystal structures and materials, among which 70 000 MOF materials can be found. Each crystal structure undergoes extensive validation and cross-checking by expert chemists and crystallographers to ensure that the database is maintained to the highest possible standards. Apart from X-ray and neutron diffraction analyses and the three-dimensional structure, every entry is enriched with bibliographic, chemical and physical information. Even though all published MOF structures are collected in the CSD, it is not easy to distinguish them from the rest of the structures in the CSD. In this vein, Watanabe et al. have extracted 30 000 extended MOF compounds from the CSD, among which 1163 MOF materials were evaluated for CO2/N2 separation [74]. In 2013, Goldsmith et al. published an automated approach for screening 20 000 porous structures in the CSD useful for hydrogen storage [75]. This requires the use of algorithms for virtual solvent removal, and relies on an established empirical correlation between excess hydrogen uptake and surface area.
In 2014, Chung et al. developed a curated database of MOF structures, named the ‘Computation-Ready Experimental MOFs’ (CoRE MOF) database; it is available at https://gregchung.github.io/CoRE-MOFs/. It contains over 6000 three-dimensional MOFs, with solvents and templating agents cleaned, and with a pore limiting diameter (PLD) larger than 2.4 Å [76]. The protocol used to generate the database, represented in figure 4, is the following: (i) identify and extract MOF structures from CSD, based on atomic types and bonds present; (ii) remove solvent molecules and included templates; (iii) in some cases, remove disorder. Several recent studies have used this database as a starting point [77,78]. Additional computational data can also be added to the database, as did the Sholl group by computing and publishing point charges derived from periodic DFT calculations for more than 2000 structures in the CoRE MOF database [79]. This allows for easier reuse by other research groups, as a starting point for adsorption calculations of polar molecules, for example.
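Step (ii) of such a protocol, removing free solvent and template molecules, essentially amounts to splitting the bonding network into connected components and discarding the small disconnected fragments. A hedged pure-Python sketch (the bonding graph below is hypothetical, and real tools must also handle periodic images and coordinated solvents):

```python
def connected_components(bonds):
    """Split a bonding graph {atom: [neighbours]} into connected components."""
    seen = set()
    components = []
    for atom in bonds:
        if atom in seen:
            continue
        stack, component = [atom], set()
        while stack:
            a = stack.pop()
            if a in component:
                continue
            component.add(a)
            stack.extend(bonds[a])
        seen |= component
        components.append(component)
    return components

def strip_solvent(bonds):
    """Keep only the largest bonded component (the extended framework),
    dropping small disconnected fragments (free solvent or template molecules)."""
    return max(connected_components(bonds), key=len)

# Hypothetical example: a 6-atom 'framework' plus a 3-atom 'solvent' fragment
bonds = {
    0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4],  # framework chain
    6: [7], 7: [6, 8], 8: [7],                                    # solvent
}
framework = strip_solvent(bonds)
```

In production pipelines, the "largest component" criterion is refined with chemical rules, e.g. to avoid deleting charge-balancing ions or bound ligands.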
Despite the importance of these CSD-derived databases, they are not integrated within the CSD, and thus require manual updates over time, as new entries are added to the CSD. To address this deficiency, Moghadam et al. have recently implemented seven criteria for MOFs embedded within a custom CSD Python Application Programming Interface (API) workflow [73]. The constructed CSD MOF subset is currently integrated into the CCDC's (Cambridge Crystallographic Data Centre) structure search program ConQuest, which allows for tailored structural queries and visualization. CSD MOF thus presents the most complete collection of MOFs, and will stay synchronized with the CSD as time goes by. The authors have also developed an array of computational algorithms in order to remove the solvent molecules from the CSD MOF subset, and then to calculate the geometric and physical properties of all the structures in the database.
Finally, we should note here that some effort has also been devoted to designing hypothetical MOF structures. In this quest, Wilmer et al. have generated a database of 137 953 hypothetical MOF structures from 102 different building blocks, containing secondary building units (SBU) and organic linkers [80]. The authors then used this database as a starting point for computational screening, with the goal of identifying the best candidates for specific applications. This was applied, for example, to the cases of hydrogen storage, methane storage and adsorption/stability of water [81–83]. However, once a computational screening approach has identified possible targets, the design of synthesis protocols for these hypothetical materials, as well as their feasibility, is still often a complex issue.
(c) The Materials Project
For other crystalline compounds, and for inorganic solids in particular, there have been a large number of databases, often with a specific focus on a particular class of materials. Most—but not all—are dedicated to experimental structures and properties. They are briefly reviewed in [84], for the interested reader. We want to focus here on a recent development, the Materials Project, which provides a material database as well as an open API (and web portal) to computed information on known and predicted materials. At the time of writing, it includes information about 86 371 inorganic compounds, and it is regularly updated with additional entries. It also aggregates nanoporous structures from several databases, including the CoRE MOFs, hypothetical MOFs and zeolites described in the previous sections, as well as computationally predicted porous polymeric networks (PPNs). The main goal of this database is to accelerate advanced material discovery and deployment [85]. Classes of materials that feature a specific focus include battery materials, intercalation electrodes and conversion electrodes.
The database is open—after registration—and accessible through its own open-source API. A high-quality reference implementation of this API is provided as part of the open-source Python Materials Genomics (pymatgen) material analysis package, available at http://pymatgen.org/. In addition to the Materials Project API, pymatgen is a generic material-oriented Python library, with classes for the representation of elements, sites, molecules and periodic structures, input/output support for several common file formats, analysis tools for electronic structure and physical properties, etc. For non-programmers, the Materials Project also includes a web front-end at https://materialsproject.org/, through which one can access the large dataset. Properties, such as space group, X-ray diffraction, band structures and elastic properties, can be browsed or searched. This architecture is extensible; for example, our group has recently provided an integration of the online ELATE application for the analysis and visualization of elastic tensors [86]. This ELATE analysis and visualization is linked from every Materials Project entry that contains elastic data, i.e. every crystalline solid for which the elastic stiffness tensor has been computed by DFT calculations. In 2015, de Jong et al. reported elastic properties for 1181 inorganic compounds [87]. This number has since grown: the database currently contains elastic information for 13 934 inorganic compounds, and it continues to increase.
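The elastic data stored for each such entry is a 6×6 stiffness matrix in Voigt notation, from which scalar quantities like the Voigt average of the bulk modulus are derived: K_V = [C11 + C22 + C33 + 2(C12 + C13 + C23)]/9. A plain-Python sketch of this standard formula (the numerical values below are illustrative, silicon-like constants, not taken from the database):

```python
def voigt_bulk_modulus(C):
    """Voigt average of the bulk modulus from a 6x6 stiffness matrix (GPa).

    K_V = (C11 + C22 + C33 + 2*(C12 + C13 + C23)) / 9
    """
    return (C[0][0] + C[1][1] + C[2][2]
            + 2.0 * (C[0][1] + C[0][2] + C[1][2])) / 9.0

# Hypothetical cubic stiffness matrix (Si-like values, in GPa)
C11, C12, C44 = 165.6, 63.9, 79.5
C = [
    [C11, C12, C12, 0, 0, 0],
    [C12, C11, C12, 0, 0, 0],
    [C12, C12, C11, 0, 0, 0],
    [0, 0, 0, C44, 0, 0],
    [0, 0, 0, 0, C44, 0],
    [0, 0, 0, 0, 0, C44],
]
K_V = voigt_bulk_modulus(C)  # for cubic symmetry this reduces to (C11 + 2*C12)/3
```

Tools such as ELATE compute this average along with the Reuss and Hill averages, directional moduli and anisotropy measures from the same tensor.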
4. Machine learning for property prediction
While the databases of structures, both experimentally determined and hypothetical, grow at a fast pace, the efforts to add physical and chemical properties of these materials to databases are happening on a longer timescale. The current theoretical chemistry methods, at the microscopic (quantum chemistry and classical molecular modelling) and mesoscopic scales, make it possible to predict and understand the physical and chemical behaviour of given materials that already exist. However, these methods are computationally intensive, and their use on a very large scale is somewhat limited. Computational screening studies based on existing databases, as we have described above, are often limited to very simple descriptors of a material's performance for a given application. They are often used in a multi-stage strategy, where filters of increasing complexity and computational cost are applied successively. For example, in the case of adsorption, studies will focus first on pore space and accessible area (geometric descriptors), then identify among the best-performing candidates the ones suitable for adsorption based on Grand Canonical Monte Carlo simulations. A similar strategy was applied by Davies et al. for the screening of stoichiometric inorganic materials for water splitting, where low-computational-cost filters based on electronegativity, electronic chemical potential and atomic solid-state energy were used [88].
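This multi-stage funnel can be sketched generically: cheap filters run first on every candidate, so the expensive evaluations only touch the few survivors. All names, descriptors and thresholds below are hypothetical, with a simple stored value standing in for the result of a costly simulation:

```python
def screen(candidates, stages):
    """Apply (name, predicate) filter stages of increasing cost in turn.

    Each stage keeps only the candidates passing its predicate, so the most
    expensive evaluations run on the fewest materials.
    """
    surviving = list(candidates)
    for name, keep in stages:
        surviving = [c for c in surviving if keep(c)]
    return surviving

# Hypothetical materials described by cheap geometric descriptors, plus a
# stand-in value for an expensive simulated gas uptake
materials = [
    {"name": "A", "pore_diameter": 3.1, "surface_area": 1200, "uptake": 4.2},
    {"name": "B", "pore_diameter": 6.5, "surface_area": 2100, "uptake": 7.9},
    {"name": "C", "pore_diameter": 5.8, "surface_area": 300,  "uptake": 1.1},
    {"name": "D", "pore_diameter": 7.2, "surface_area": 1800, "uptake": 2.5},
]
hits = screen(materials, [
    ("geometry", lambda m: m["pore_diameter"] > 4.0),  # cheap: pore size
    ("area", lambda m: m["surface_area"] > 1000),      # cheap: surface area
    ("adsorption", lambda m: m["uptake"] > 5.0),       # costly: GCMC stand-in
])
```

The order of the stages is the key design choice: each filter should be cheaper than the one after it, and permissive enough not to discard materials the expensive stage would have kept.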
To go beyond these methods and identify novel materials for targeted applications, there is thus a need to develop active methods for property prediction based on structure and chemical composition, bypassing quantum calculations and classical molecular simulations—at least during an initial high-throughput screening step. In order to develop such methods, databases are useful in two different ways: first, databases of physical and chemical properties are necessary in order to train, benchmark and validate the new prediction methods. Second, larger databases of hypothetical structures are needed as a basis for large-scale screening, once the property prediction methods are adequate.
With this goal in mind, machine learning appears as a powerful tool for predicting chemical and physical properties for large numbers of materials, i.e. at low computational cost. Neural networks—already presented in §2—are a class of machine learning algorithms, but many others exist. Machine learning is the generic term used for algorithms that generate another algorithm, in order to progressively improve their performance on a task they have not been explicitly programmed to perform. In the most commonly used family of machine learning methods, called supervised learning, the algorithm generated is called the predictor. It takes a set of input descriptors, and maps them to the required output. This output is usually the numeric value of a physical property in our case, but it can also be the classification of the input into a given class. When using machine learning on chemical systems, the descriptors can take multiple forms: local descriptors such as atomic positions, bond lengths, angles or dihedral angles; global descriptors like mass density, largest included sphere in a porous framework or elastic properties; and topological descriptors such as ring size distribution. As we said, machine learning algorithms generate predictor algorithms from a set of reference input and output data. The idea is to train the machine learning algorithm on a subset of the data, and then test the generated predictor on the remaining part of the reference data. This allows us to evaluate the accuracy of the predictor. For more information on machine learning and its usage in molecular and materials science, we refer the interested reader to the very pedagogical review by Butler et al. [89].
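The train/test workflow just described can be illustrated with the simplest possible supervised predictor, a one-nearest-neighbour regressor on descriptor vectors. The descriptors and property values below are entirely made up for illustration:

```python
import math

def nearest_neighbour_predict(train_X, train_y, x):
    """Predict the property of x as that of the closest training descriptor."""
    distances = [math.dist(x, xi) for xi in train_X]
    return train_y[distances.index(min(distances))]

def mean_absolute_error(test_X, test_y, train_X, train_y):
    """Average error of the predictor on data held out from training."""
    errors = [abs(nearest_neighbour_predict(train_X, train_y, x) - yi)
              for x, yi in zip(test_X, test_y)]
    return sum(errors) / len(errors)

# Hypothetical reference data: (density, largest included sphere) -> bulk modulus
X = [(2.2, 3.1), (1.8, 5.0), (2.5, 2.4), (1.6, 6.2), (2.0, 4.1), (2.4, 2.8)]
y = [45.0, 22.0, 61.0, 15.0, 30.0, 55.0]

# Train on the first four points, evaluate on the held-out remainder
train_X, train_y = X[:4], y[:4]
test_X, test_y = X[4:], y[4:]
mae = mean_absolute_error(test_X, test_y, train_X, train_y)
```

Real studies replace the toy predictor with more expressive models (neural networks, gradient boosting), but the split between training and held-out test data, and the error measured on the latter, are the same.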
Within the fields of physics and chemistry, machine learning has been applied to a large diversity of applications. On the computational side, research is ongoing on the use of machine learning to improve electronic structure calculations by bypassing the Kohn–Sham equations [90], developing machine-learned functionals [91] and creating adaptive basis sets [92]. Other applications in chemistry include the extraction of chemical data (structures, reactions, etc.) from published work [93], the prediction of novel synthetic pathways [94], the design of catalysts [95], etc. In 2016, de Jong et al. used machine learning techniques to predict elastic properties (bulk and shear modulus) for inorganic compounds in order to accelerate material discovery and design [96]. However, few studies have focused so far on framework materials and their physical properties. Recently, Evans et al. [97] used a machine learning algorithm to predict elastic properties (such as the bulk modulus and shear modulus) of 590 448 hypothetical pure-silica zeolites, using an accurate training set of elastic properties determined with DFT calculations [98]. They combined the GBR (gradient boosting regressor) approach using regression trees with a set of local, structural and porosity-related descriptors, and their results highlighted several important correlations and trends in terms of stability for zeolitic structures. This was later extended by Gaillac and co-workers to predict the auxeticity and the Poisson's ratio of more than 1000 zeolites [99]. These recent advances, combined with the availability of DFT-computed elastic tensors for a large number of inorganic materials within the Materials Project, create new opportunities for computationally assisted material discovery and design. We should also note here, for the sake of completeness, that unsupervised machine learning has also been applied to chemical questions: such techniques take a dataset as input and identify hidden structures in the data—e.g. clustering of data points or structures by similarity [100,101].
5. Conclusion
We have given here a short overview of the current state of methodologies for modelling framework materials at multiple scales and tried to highlight some of the common themes as well as differences between this rapidly expanding class of materials and other inorganic solids. It is clear from the examples listed that the diversity of modelling methods is also growing to match the rapid pace of experimental developments, and the increasing complexity of the systems and phenomena studied. However, while modelling strategies develop at all length and time scales, from the microscopic to the macroscopic, the links between these simulation scales are still rather ad hoc, and comprehensive, coherent multi-scale simulation strategies are still the exception, rather than the norm. Just as experimental and computational tools are complementary in providing a large variety of viewpoints on a given material, studies combining multiple simulation strategies at different scales are appearing, which provide a very deep understanding of the macroscopic properties of a material and their microscopic origins.
Data accessibility
Supporting data are available online in our data repository at https://github.com/fxcoudert/citable-data.
Competing interests
We declare we have no competing interests.
Acknowledgments
A large part of the work reviewed here requires that scientists in the field have access to large supercomputer centres. Although no original calculations were performed in the writing of this review, we acknowledge GENCI for high-performance computing CPU time allocations (grant no. A0050807069). We sincerely thank colleagues from Université de France (UNIV France) for their support. We thank Cory Simon and Martijn Zwijnenburg for their insightful feedback on a first version of this manuscript, which appeared on the chemRxiv preprint server.
Footnotes
One contribution of 10 to a theme issue ‘Mineralomimesis: natural and synthetic frameworks in science and technology’.
References
• 1.
Horike S, Shimomura S, Kitagawa S. 2009 Soft porous crystals. Nat. Chem. 1, 695–704. (doi:10.1038/nchem.444)
• 2.
Coudert FX. 2015 Responsive metal–organic frameworks and framework materials: under pressure, taking the heat, in the spotlight, with friends. Chem. Mater. 27, 1905–1916. (doi:10.1021/acs.chemmater.5b00046)
• 3.
Krause S et al. 2016 A pressure-amplifying framework material with negative gas adsorption transitions. Nature 532, 348–352. (doi:10.1038/nature17430)
• 4.
Cairns AB, Goodwin AL. 2015 Negative linear compressibility. Phys. Chem. Chem. Phys. 17, 20 449–20 465. (doi:10.1039/C5CP00442J)
• 5.
Spencer EC, Kiran MSRN, Li W, Ramamurty U, Ross NL, Cheetham AK. 2014 Pressure-induced bond rearrangement and reversible phase transformation in a metal–organic framework. Angew. Chem. Int. Ed. 53, 5583–5586. (doi:10.1002/anie.v53.22)
• 6.
Lyndon R, Konstas K, Ladewig BP, Southon PD, Kepert PCJ, Hill MR. 2013 Dynamic photo-switching in metal–organic frameworks as a route to low-energy carbon dioxide capture and release. Angew. Chem. Int. Ed. 52, 3695–3698. (doi:10.1002/anie.201206359)
• 7.
Lapidus SH, Halder GJ, Chupas PJ, Chapman KW. 2013 Exploiting high pressures to generate porosity, polymorphism, and lattice expansion in the nonporous molecular framework Zn(CN)2. J. Am. Chem. Soc. 135, 7621–7628. (doi:10.1021/ja4012707)
• 8.
Cornell WD et al. 1995 A second generation force field for the simulation of proteins, nucleic acids, and organic molecules. J. Am. Chem. Soc. 117, 5179–5197. (doi:10.1021/ja00124a002)
• 9.
Rappe AK, Casewit CJ, Colwell KS, Goddard WA, Skiff WM. 1992 UFF, a full periodic table force field for molecular mechanics and molecular dynamics simulations. J. Am. Chem. Soc. 114, 10 024–10 035. (doi:10.1021/ja00051a040)
• 10.
Greathouse JA, Ockwig NW, Criscenti LJ, Guilinger TR, Pohl P, Allendorf MD. 2010 Computational screening of metal–organic frameworks for large-molecule chemical sensing. Phys. Chem. Chem. Phys. 12, 12621. (doi:10.1039/c0cp00092b)
• 11.
Ryan P, Farha OK, Broadbelt LJ, Snurr RQ. 2010 Computational screening of metal–organic frameworks for xenon/krypton separation. AIChE J. 57, 1759–1766. (doi:10.1002/aic.v57.7)
• 12.
Colón YJ, Snurr RQ. 2014 High-throughput computational screening of metal–organic frameworks. Chem. Soc. Rev. 43, 5735–5749. (doi:10.1039/C4CS00070F)
• 13.
Chaplais G, Fraux G, Paillaud JL, Marichal C, Nouali H, Fuchs AH, Coudert FX, Patarin J. 2018 Impacts of the imidazolate linker substitution (CH3, Cl or Br) on the structural and adsorptive properties of ZIF-8. J. Phys. Chem. C 122, 26 945–26 955. (doi:10.1021/acs.jpcc.8b08706)
• 14.
Howe JD, Morelock CR, Jiao Y, Chapman KW, Walton KS, Sholl DS. 2016 Understanding structure, metal distribution, and water adsorption in mixed-metal MOF-74. J. Phys. Chem. C 121, 627–635. (doi:10.1021/acs.jpcc.6b11719)
• 15.
McGrath MJ, Siepmann JI, Kuo IFW, Mundy CJ, VandeVondele J, Hutter J, Mohamed F, Krack M. 2005 Isobaric–isothermal Monte Carlo simulations from first principles: application to liquid water at ambient conditions. ChemPhysChem 6, 1894–1901. (doi:10.1002/(ISSN)1439-7641)
• 16.
Leiding J, Coe JD. 2014 An efficient approach to ab initio Monte Carlo simulation. J. Chem. Phys. 140, 034106. (doi:10.1063/1.4855755)
• 17.
Falcioni M, Deem MW. 1999 A biased Monte Carlo scheme for zeolite structure solution. J. Chem. Phys. 110, 1754–1766. (doi:10.1063/1.477812)
• 18.
Maurin G, Senet P, Devautour S, Gaveau P, Henn F, Van Doren VE, Giuntini JC. 2001 Combining the Monte Carlo technique with 29Si NMR spectroscopy: simulations of cation locations in zeolites with various Si/Al ratios. J. Phys. Chem. B 105, 9157–9161. (doi:10.1021/jp011789i)
• 19.
Coudert FX, Fuchs AH. 2016 Computational characterization and prediction of metal–organic framework properties. Coord. Chem. Rev. 307, 211–236. (doi:10.1016/j.ccr.2015.08.001)
• 20.
Paddison JAM, Agrestini S, Lees MR, Fleck CL, Deen PP, Goodwin AL, Stewart JR, Petrenko OA. 2014 Spin correlations in Ca3Co2O6: polarized-neutron diffraction and Monte Carlo study. Phys. Rev. B 90, 014411. (doi:10.1103/PhysRevB.90.014411)
• 21.
Sapnik AF, Geddes HS, Reynolds EM, Yeung HHM, Goodwin AL. 2018 Compositional inhomogeneity and tuneable thermal expansion in mixed-metal ZIF-8 analogues. Chem. Commun. 54, 9651–9654. (doi:10.1039/C8CC04172E)
• 22.
Cairns AB, Cliffe MJ, Paddison JAM, Daisenberger D, Tucker MG, Coudert FX, Goodwin AL. 2016 Encoding complexity within supramolecular analogues of frustrated magnets. Nat. Chem. 8, 442–447. (doi:10.1038/nchem.2462)
• 23.
Bureekaew S, Amirjalayer S, Tafipolsky M, Spickermann C, Roy TK, Schmid R. 2013 MOF-FF: a flexible first-principles derived force field for metal–organic frameworks. Phys. Status Solidi B 250, 1128–1141. (doi:10.1002/pssb.v250.6)
• 24.
Vanduyfhuys L, Vandenbrande S, Verstraelen T, Schmid R, Waroquier M, Van Speybroeck V. 2015 QuickFF: a program for a quick and easy derivation of force fields for metal–organic frameworks from ab initio input. J. Comput. Chem. 36, 1015–1027. (doi:10.1002/jcc.v36.13)
• 25.
Impeng S, Cedeno R, Dürholt JP, Schmid R, Bureekaew S. 2018 Computational structure prediction of (4,4)-connected copper paddle-wheel-based MOFs: influence of ligand functionalization on the topological preference. Cryst. Growth Des. 18, 2699–2706. (doi:10.1021/acs.cgd.8b00238)
• 26.
Vanduyfhuys L, Vandenbrande S, Wieme J, Waroquier M, Verstraelen T, Van Speybroeck V. 2018 Extension of the QuickFF force field protocol for an improved accuracy of structural, vibrational, mechanical and thermal properties of metal–organic frameworks. J. Comput. Chem. 39, 999–1011. (doi:10.1002/jcc.v39.16)
• 27.
Hornik K, Stinchcombe M, White H. 1989 Multilayer feedforward networks are universal approximators. Neural Netw. 2, 359–366. (doi:10.1016/0893-6080(89)90020-8)
• 28.
Behler J, Parrinello M. 2007 Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys. Rev. Lett. 98, 146401. (doi:10.1103/PhysRevLett.98.146401)
• 29.
Smith JS, Isayev O, Roitberg AE. 2017 ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost. Chem. Sci. 8, 3192–3203. (doi:10.1039/C6SC05720A)
• 30.
Cho KH, No KT, Scheraga HA. 2002 A polarizable force field for water using an artificial neural network. J. Mol. Struct. 641, 77–91. (doi:10.1016/S0022-2860(02)00299-5)
• 31.
Morawietz T, Behler J. 2013 A density-functional theory-based neural network potential for water clusters including van der Waals corrections. J. Phys. Chem. A 117, 7356–7366. (doi:10.1021/jp401225b)
• 32.
Hellström M, Behler J. 2018Neural network potentials in materials modeling. In Handbook of materials modeling, pp. 1–20. Berlin: Springer International Publishing. Google Scholar
• 33.
Deringer VL, Csányi G. 2017Machine learning based interatomic potential for amorphous carbon. Phys. Rev. B 95, 094203. (doi:10.1103/PhysRevB.95.094203)
• 34.
Deringer VL, Bernstein N, Bartók AP, Cliffe MJ, Kerber RN, Marbella LE, Grey CP, Elliott SR, Csányi G. 2018Realistic atomistic structure of amorphous silicon from machine-learning-driven molecular dynamics. J. Phys. Chem. Lett. 9, 2879–2885. (doi:10.1021/acs.jpclett.8b00902)
• 35.
Bennett TDet al.2016Melt-quenched glasses of metal–organic frameworks. J. Am. Chem. Soc. 138, 3484–3492. (doi:10.1021/jacs.5b13220)
• 36.
Zhao Y, Lee SY, Becknell N, Yaghi OM, Angell CA. 2016Nanoporous transparent mof glasses with accessible internal surface. J. Am. Chem. Soc. 138, 10 818–10 821. (doi:10.1021/jacs.6b07078)
• 37.
Zhou Cet al.2018Metal-organic framework glasses with permanent accessible porosity. Nat. Commun. 9, 19. (doi:10.1038/s41586-018-0513-4)
• 38.
Gaillac R, Pullumbi P, Beyer KA, Chapman KW, Keen DA, Bennett TD, Coudert FX. 2017Liquid metal–organic frameworks. Nat. Mater. 16, 1149–1154. (doi:10.1038/nmat4998)
• 39.
Cheetham AK, Bennett TD, Coudert FX, Goodwin AL. 2016Defects and disorder in metal organic frameworks. Dalton Trans. 45, 4113–4126. (doi:10.1039/C5DT04392A)
• 40.
Ginhoven RMV, Jónsson H, Corrales LR. 2005Silica glass structure generation forab initiocalculations using small samples of amorphous silica. Phys. Rev. B 71, 024208. (doi:10.1103/PhysRevB.71.024208)
• 41.
Dürholt JP, Galvelis R, Schmid R. 2016Coarse graining of force fields for metal–organic frameworks. Dalton Trans. 45, 4370–4379. (doi:10.1039/C5DT03865K)
• 42.
Vanson JM, Coudert FX, Rotenberg B, Levesque M, Tardivat C, Klotz M, Boutin A. 2015Unexpected coupling between flow and adsorption in porous media. Soft Matter 11, 6125–6133. (doi:10.1039/C5SM01348H)
• 43.
Evans JD, Coudert FX. 2017Macroscopic simulation of deformation in soft microporous composites. J. Phys. Chem. Lett. 8, 1578–1584. (doi:10.1021/acs.jpclett.7b00397)
• 44.
Vanduyfhuys L, Verstraelen T, Vandichel M, Waroquier M, Van Speybroeck V. 2012Ab initio parametrized force field for the flexible metal–organic framework mil-53(al). J. Chem. Theory Comput. 8, 3217–3231. (doi:10.1021/ct300172m)
• 45.
Coudert FX, Fuchs AH, Neimark AV. 2016Adsorption deformation of microporous composites. Dalton Trans. 45, 4136–4140. (doi:10.1039/C5DT03978A)
• 46.
Runge E, Gross EKU. 1984Density-functional theory for time-dependent systems. Phys. Rev. Lett. 52, 997–1000. (doi:10.1103/PhysRevLett.52.997)
• 47.
Casida ME. 1995Time-dependent density functional response theory for molecules. In Recent advances in density functional methods, pp. 155–192. World Scientific. Google Scholar
• 48.
Wilbraham L, Coudert FX, Ciofini I. 2016Modelling photophysical properties of metal-organic frameworks: a density functional theory based approach. Phys. Chem. Chem. Phys. 18, 25 176–25 182. (doi:10.1039/C6CP04056J)
• 49.
Wu XP, Gagliardi L, Truhlar DG. 2018Cerium metal–organic framework for photocatalysis. J. Am. Chem. Soc. 140, 7904–7912. (doi:10.1021/jacs.8b03613)
• 50.
Butler KT, Hendon CH, Walsh A. 2014Electronic chemical potentials of porous metal–organic frameworks. J. Am. Chem. Soc. 136, 2703–2706. (doi:10.1021/ja4110073)
• 51.
Grau-Crespo R, Aziz A, Collins AW, Crespo-Otero R, Hernández NC, Rodriguez-Albelo LM, Ruiz-Salvador AR, Calero S, Hamad S. 2016Modelling a linker mix-and-match approach for controlling the optical excitation gaps and band alignment of zeolitic imidazolate frameworks. Angew. Chem. 128, 16 246–16 250. (doi:10.1002/ange.201609439)
• 52.
Baerlocher C, McCusker LB, Olson D. 2007Atlas of zeolite framework types, 6th edn. Amsterdam, The Netherland: Elsevier. Google Scholar
• 53.
Speybroeck VV, Hemelsoet K, Joos L, Waroquier M, Bell RG, Catlow CRA. 2015Advances in theory and their application within the field of zeolite chemistry. Chem. Soc. Rev. 44, 7044–7111. (doi:10.1039/C5CS00029G)
• 54.
Chapman KW, Chupas PJ, Nenoff TM. 2010Radioactive iodine capture in silver-containing mordenites through nanoscale silver iodide formation. J. Am. Chem. Soc. 132, 8897–8899. (doi:10.1021/ja103110y)
• 55.
Bučko T, Benco L, Hafner J. Ángyá JG. 2011Monomolecular cracking of propane over acidic chabazite: an ab initio molecular dynamics and transition path sampling study. J. Catal. 279, 220–228. (doi:10.1016/j.jcat.2011.01.022)
• 56.
Nenoff TM, Rodriguez MA, Soelberg NR, Chapman KW. 2014Silver-mordenite for radiologic gas capture from complex streams: dual catalytic CH3i decomposition and i confinement. Micropor. Mesopor. Mat. 200, 297–303. (doi:10.1016/j.micromeso.2014.04.041)
• 57.
Bushuev YG, Sastre G. 2010Feasibility of pure silica zeolites. J. Phys. Chem. C 114, 19 157–19 168. (doi:10.1021/jp107296e)
• 58.
Blatov VA, Ilyushin GD, Proserpio DM. 2013The zeolite conundrum: why are there so many hypothetical zeolites and so few observed? A possible answer from the zeolite-type frameworks perceived as packings of tiles. Chem. Mater. 25, 412–424. (doi:10.1021/cm303528u)
• 59.
Li Y, Yu J, Liu D, Yan W, Xu R, Xu Y. 2003Design of zeolite frameworks with defined pore geometry through constrained assembly of atoms. Chem. Mater. 15, 2780–2785. (doi:10.1021/cm0213826)
• 60.
Yu J, Xu R. 2010Rational approaches toward the design and synthesis of zeolitic inorganic open-framework materials. Acc. Chem. Res. 43, 1195–1204. (doi:10.1021/ar900293m)
• 61.
Li Y, Yu J. 2014New stories of zeolite structures: their descriptions, determinations, predictions, and evaluations. Chem. Rev. 114, 7268–7316. (doi:10.1021/cr500010r)
• 62.
Li J, Corma A, Yu J. 2015Synthesis of new zeolite structures. Chem. Soc. Rev. 44, 7112–7127. (doi:10.1039/C5CS00023H)
• 63.
Li Y, Yu J, Xu R. 2012FraGen: a computer program for real-space structure solution of extended inorganic frameworks. J. Appl. Cryst. 45, 855–861. (doi:10.1107/S002188981201878X)
• 64.
Li Y, Li X, Liu J, Duan F, Yu J. 2015In silico prediction and screening of modular crystal structures via a high-throughput genomic approach. Nature Commun. 6, 011002. (doi:10.1038/ncomms9328)
• 65.
Treacy MMJ, Randall KH, Rao S, Perry JA, Chadi DJ. 1997Enumeration of periodic tetrahedral frameworks. Z. Kristallogr. Cryst. Mater. 212, 50. (doi:10.1524/zkri.1997.212.11.768)
• 66.
Treacy M, Rivin I, Balkovsky E, Randall K, Foster M. 2004Enumeration of periodic tetrahedral frameworks. ii. polynodal graphs. Micropor. Mesopor. Mater. 74, 121–132. (doi:10.1016/j.micromeso.2004.06.013)
• 67.
Zwijnenburg MA, Bell RG. 2008Absence of limitations on the framework density and pore size of high-silica zeolites. Chem. Mater. 20, 3008–3014. (doi:10.1021/cm702175q)
• 68.
Zwijnenburg MA, Jelfs KE, Bromley ST. 2010An extensive theoretical survey of low-density allotropy in silicon. Phys. Chem. Chem. Phys. 12, 8505. (doi:10.1039/c004375c)
• 69.
Zwijnenburg MA, Illas F, Bromley ST. 2010Apparent scarcity of low-density polymorphs of inorganic solids. Phys. Rev. Lett. 104, 768. (doi:10.1103/PhysRevLett.104.175503)
• 70.
Earl DJ, Deem MW. 2006Toward a database of hypothetical zeolite structures. Ind. Eng. Chem. Res. 45, 5449–5454. (doi:10.1021/ie0510728)
• 71.
Pophale R, Cheeseman PA, Deem MW. 2011A database of new zeolite-like materials. Phys. Chem. Chem. Phys. 13, 12407. (doi:10.1039/c0cp02255a)
• 72.
Yu J, Xu R. 2006Insight into the construction of open-framework aluminophosphates. Chem. Soc. Rev. 35, 593. (doi:10.1039/b505856m)
• 73.
Moghadam PZ, Li A, Wiggin SB, Tao A, Maloney AGP, Wood PA, Ward SC, Fairen-Jimenez D. 2017Development of a Cambridge Structural Database subset: a collection of metal–organic frameworks for past, present, and future. Chem. Mater. 29, 2618–2625. (doi:10.1021/acs.chemmater.7b00441)
• 74.
Watanabe T, Sholl DS. 2012Accelerating applications of metal–organic frameworks for gas adsorption and separation by computational screening of materials. Langmuir 28, 14 114–14 128. (doi:10.1021/la301915s)
• 75.
Goldsmith J, Wong-Foy AG, Cafarella MJ, Siegel DJ. 2013Theoretical limits of hydrogen storage in metal–organic frameworks: opportunities and trade-offs. Chem. Mater. 25, 3373–3382. (doi:10.1021/cm401978e)
• 76.
Chung YGet al.2014Computation-ready, experimental metal–organic frameworks: a tool to enable high-throughput screening of nanoporous crystals. Chem. Mater. 26, 6185–6192. (doi:10.1021/cm502594j)
• 77.
Barthel S, Alexandrov EV, Proserpio DM, Smit B. 2018Distinguishing metal–organic frameworks. Cryst. Growth Des. 18, 1738–1747. (doi:10.1021/acs.cgd.7b01663)
• 78.
Park S, Kim B, Choi S, Boyd PG, Smit B, Kim J. 2018Text mining metal–organic framework papers. J. Chem. Inf. Model. 58, 244–251. (doi:10.1021/acs.jcim.7b00608)
• 79.
Nazarian D, Camp JS, Sholl DS. 2016A comprehensive set of high-quality point charges for simulations of metal–organic frameworks. Chem. Mater. 28, 785–793. (doi:10.1021/acs.chemmater.5b03836)
• 80.
Wilmer CE, Leaf M, Lee CY, Farha OK, Hauser BG, Hupp JT, Snurr RQ. 2011Large-scale screening of hypothetical metal–organic frameworks. Nat. Chem. 4, 83–89. (doi:10.1038/nchem.1192)
• 81.
Gomez DA, Toda J, Sastre G. 2014Screening of hypothetical metal–organic frameworks for h2 storage. Phys. Chem. Chem. Phys. 16, 19 001–19 010. (doi:10.1039/C4CP01848F)
• 82.
He Y, Zhou W, Qian G, Chen B. 2014Methane storage in metal–organic frameworks. Chem. Soc. Rev. 43, 5657–5678. (doi:10.1039/C4CS00032C)
• 83.
Burtch NC, Jasuja H, Walton KS. 2014Water stability and adsorption in metal–organic frameworks. Chem. Rev. 114, 10 575–10 612. (doi:10.1021/cr5002589)
• 84.
Zakutayev A, Wunder N, Schwarting M, Perkins JD, White R, Munch K, Tumas W, Phillips C. 2018An open experimental database for exploring inorganic materials. Sci. Data 5, 180053. (doi:10.1038/sdata.2018.53)
• 85.
Jain Aet al.2013Commentary: the materials project: a materials genome approach to accelerating materials innovation. APL Mater. 1, 011002. (doi:10.1063/1.4812323)
• 86.
Gaillac R, Pullumbi P, Coudert FX. 2016Elate: an open-source online application for analysis and visualization of elastic tensors. J. Phys. Condens. Matter 28, 275201. (doi:10.1088/0953-8984/28/27/275201)
• 87.
de Jong Met al.2015Charting the complete elastic properties of inorganic crystalline compounds. Sci. Data 2, 150009. (doi:10.1038/sdata.2015.9)
• 88.
Davies DW, Butler KT, Jackson AJ, Morris A, Frost JM, Skelton JM, Walsh A. 2016Computational screening of all stoichiometric inorganic materials. Chemistry 1, 617–627. (doi:10.1016/j.chempr.2016.09.010)
• 89.
Butler KT, Davies DW, Cartwright H, Isayev O, Walsh A. 2018Machine learning for molecular and materials science. Nature 559, 547–555. (doi:10.1038/s41586-018-0337-2)
• 90.
Brockherde F, Vogt L, Li L, Tuckerman ME, Burke K, Müller KR. 2017Bypassing the kohn-sham equations with machine learning. Nat. Commun. 8, A1133. (doi:10.1038/s41467-017-00839-3)
• 91.
Hollingsworth J, Baker TE, Burke K. 2018Can exact conditions improve machine-learned density functionals?J. Chem. Phys. 148, 241743. (doi:10.1063/1.5025668)
• 92.
Schütt O, VandeVondele J. 2018Machine learning adaptive basis sets for efficient large scale density functional theory simulation. J. Chem. Theory Comput. 14, 4168–4175. (doi:10.1021/acs.jctc.8b00378)
• 93.
Kim E, Huang K, Saunders A, McCallum A, Ceder G, Olivetti E. 2017Materials synthesis insights from scientific literature via text extraction and machine learning. Chem. Mater. 29, 9436–9444. (doi:10.1021/acs.chemmater.7b03500)
• 94.
Gómez-Bombarelli Ret al.2018Automatic chemical design using a data-driven continuous representation of molecules. ACS Cent. Sci. 4, 268–276. (doi:10.1021/acscentsci.7b00572)
• 95.
Goldsmith BR, Esterhuizen J, Liu JX, Bartel CJ, Sutton C. 2018Machine learning for heterogeneous catalyst design and discovery. AIChE J. 64, 2311–2323. (doi:10.1002/aic.v64.7)
• 96.
de Jong M, Chen W, Notestine R, Persson K, Ceder G, Jain A, Asta M, Gamst A. 2016A statistical learning framework for materials science: application to elastic moduli of k-nary inorganic polycrystalline compounds. Sci. Rep. 6, 34256. (doi:10.1038/srep34256)
• 97.
Evans JD, Coudert FX. 2017Predicting the mechanical properties of zeolite frameworks by machine learning. Chem. Mater. 29, 7833–7839. (doi:10.1021/acs.chemmater.7b02532)
• 98.
Coudert FX. 2013Systematic investigation of the mechanical properties of pure silica zeolites: stiffness, anisotropy, and negative linear compressibility. Phys. Chem. Chem. Phys. 15, 16012. (doi:10.1039/c3cp51817e)
• 99.
Gaillac R. 2018Molecular modeling of physico-chemical properties in microporous solids. PhD thesis, Chimie ParisTech, PSL University. See https://tel.archives-ouvertes.fr/tel-01820463. Google Scholar
• 100.
Gómez-Bombarelli Ret al.2018Automatic chemical design using a data-driven continuous representation of molecules. ACS Cent. Sci. 4, 268–276. (doi:10.1021/acscentsci.7b00572)
• 101.
Sturluson A, Huynh MT, York AHP, Simon CM. 2018Eigencages: learning a latent space of porous cage molecules. ACS Cent. Sci. 4, 1663–1676. (doi:10.1021/acscentsci.8b00638)
|
{}
|
# Question
The following information describes a company’s usage of direct labor in a recent period. Compute the direct labor rate and efficiency variances for the period.
Actual direct labor hours used: 65,000
Actual direct labor rate per hour: $15
Standard direct labor rate per hour: $14
Standard direct labor hours for units produced: 67,000
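The two variances follow from the standard cost-accounting formulas: the rate variance compares actual and standard rates over the hours actually worked, and the efficiency variance compares actual and standard hours at the standard rate. A minimal sketch (sign convention assumed here: positive = unfavorable):

```python
# Direct labor variances via the standard formulas.
# Positive result = unfavorable variance, negative = favorable.

def direct_labor_variances(actual_hours, actual_rate, standard_rate, standard_hours):
    # Rate variance: (AR - SR) x AH
    rate_variance = (actual_rate - standard_rate) * actual_hours
    # Efficiency variance: (AH - SH) x SR
    efficiency_variance = (actual_hours - standard_hours) * standard_rate
    return rate_variance, efficiency_variance

rate_var, eff_var = direct_labor_variances(
    actual_hours=65_000, actual_rate=15,
    standard_rate=14, standard_hours=67_000)
print(rate_var)  # 65000  -> $65,000 unfavorable (paid $1/hour above standard)
print(eff_var)   # -28000 -> $28,000 favorable (2,000 fewer hours than standard)
```

With the figures given, the company overpaid on rate but used fewer hours than the standard allows, so the two variances partially offset.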
Under the auspices of the Computational Complexity Foundation (CCF)
REPORTS > KEYWORD > PROPOSITIONAL PROOF SYSTEMS:
Reports tagged with propositional proof systems:
TR94-015 | 12th December 1994
Miklos Ajtai
#### Symmetric Systems of Linear Equations modulo $p$
Suppose that $p$ is a prime number, $A$ is a finite set
with $n$ elements,
and for each sequence $a=<a_{1},...,a_{k}>$ of length $k$ from the
elements of
$A$, $x_{a}$ is a variable. (We may think that $k$ and $p$ are fixed and
$n$ is sufficiently large.) We will ... more >>>
TR97-026 | 18th June 1997
Jochen Meßner, Jacobo Torán
#### Optimal proof systems for Propositional Logic and complete sets
A polynomial time computable function $h:\Sigma^*\to\Sigma^*$ whose range
is the set of tautologies in Propositional Logic (TAUT) is called
a proof system. Cook and Reckhow defined this concept
and, in order to compare the relative strength of different proof systems,
they considered the notion ... more >>>
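The Cook–Reckhow definition quoted above has a trivial instance, the truth-table proof system, in which the proof of a tautology is essentially its full truth table. As a purely illustrative sketch (not taken from any of the listed reports), here is the brute-force check in Python; note that the table has $2^n$ rows for $n$ variables, which is exactly why succinct proof systems are the interesting object of study:

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Truth-table check: evaluate `formula` (a Python callable taking
    num_vars booleans) under all 2**num_vars assignments."""
    return all(formula(*bits)
               for bits in product([False, True], repeat=num_vars))

print(is_tautology(lambda p: p or not p, 1))   # True: the law of excluded middle
print(is_tautology(lambda p, q: p and q, 2))   # False: fails at p=False
```

Here the "proof" is the exhaustive evaluation itself; verification is easy relative to the proof's length, but the proof is exponentially long in the formula size.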
TR98-021 | 7th April 1998
Shai Ben-David, Anna Gringauze
#### On the Existence of Propositional Proof Systems and Oracle-relativized Propositional Logic.
Revisions: 1
We investigate sufficient conditions for the existence of
optimal propositional proof systems (PPS).
We concentrate on conditions of the form CoNF = NF.
We introduce a purely combinatorial property of complexity classes
- the notions of {\em slim} vs. {\em fat} classes.
These notions partition the ... more >>>
TR03-011 | 17th February 2003
Christian Glaßer, Alan L. Selman, Samik Sengupta, Liyu Zhang
#### Disjoint NP-Pairs
We study the question of whether the class DisNP of
disjoint pairs (A, B) of NP-sets contains a complete pair.
The question relates to the question of whether optimal
proof systems exist, and we relate it to the previously
studied question of whether there exists ... more >>>
TR04-082 | 9th September 2004
Olaf Beyersdorff
#### Representable Disjoint NP-Pairs
Revisions: 1
We investigate the class of disjoint NP-pairs under different reductions.
The structure of this class is intimately linked to the simulation order
of propositional proof systems, and we make use of the relationship between
propositional proof systems and theories of bounded arithmetic as the main
tool of our analysis.
more >>>
TR04-106 | 19th November 2004
Christian Glaßer, Alan L. Selman, Liyu Zhang
#### Canonical Disjoint NP-Pairs of Propositional Proof Systems
We prove that every disjoint NP-pair is polynomial-time, many-one equivalent to
the canonical disjoint NP-pair of some propositional proof system. Therefore, the degree structure of the class of disjoint NP-pairs and of all canonical pairs is
identical. Secondly, we show that this degree structure is not superficial: Assuming there exist ... more >>>
TR05-077 | 15th July 2005
Zenon Sadowski
#### On a D-N-optimal acceptor for TAUT
The notion of an optimal acceptor for TAUT (the optimality
property is stated only for input strings from TAUT) comes from the line
of research aimed at resolving the question of whether optimal
propositional proof systems exist. In this paper we introduce two new
types of optimal acceptors, a D-N-optimal ... more >>>
TR05-083 | 24th July 2005
Olaf Beyersdorff
#### Disjoint NP-Pairs from Propositional Proof Systems
For a proof system P we introduce the complexity class DNPP(P)
of all disjoint NP-pairs for which the disjointness of the pair is
efficiently provable in the proof system P.
We exhibit structural properties of proof systems which make the
previously defined canonical NP-pairs of these proof systems hard ... more >>>
TR06-142 | 26th October 2006
Olaf Beyersdorff
#### On the Deduction Theorem and Complete Disjoint NP-Pairs
In this paper we ask the question whether the extended Frege proof
system EF satisfies a weak version of the deduction theorem. We
prove that if this is the case, then complete disjoint NP-pairs
exist. On the other hand, if EF is an optimal proof system, ... more >>>
TR07-018 | 1st March 2007
Christian Glaßer, Alan L. Selman, Liyu Zhang
#### The Informational Content of Canonical Disjoint NP-Pairs
We investigate the connection between propositional proof systems and their canonical pairs. It is known that simulations between proof systems translate to reductions between their canonical pairs. We focus on the opposite direction and study the following questions.
Q1: Where does the implication [can(f) \le_m can(g) => f \le_s ... more >>>
TR09-092 | 8th October 2009
Olaf Beyersdorff, Johannes Köbler, Sebastian Müller
#### Proof Systems that Take Advice
One of the starting points of propositional proof complexity is the seminal paper by Cook and Reckhow (JSL 79), where they defined
propositional proof systems as poly-time computable functions which have all propositional tautologies as their range. Motivated by provability consequences in bounded arithmetic, Cook and Krajicek (JSL 07) have ... more >>>
ISSN 1433-8092
## What to do with a kid who's just slow?
How can we expose more people to critical thinking?
Sundog
Posts: 2576
Joined: Mon Jun 07, 2004 4:27 pm
### What to do with a kid who's just slow?
OK, I need some advice. You've heard me brag about my kids more than you want to. But there's one of the bunch, my wife's son, who is in scholastic trouble, and I need some advice. I'm great with gifted kids but I have no idea how to deal with a really slow one.
He's 13 and a really nice kid, but his grades are atrocious. He's going to be repeating a grade this year for the first time; they should have held him back years ago but they kept booting him into the next grade. Now he's at an age where he needs all those rudiments and he just doesn't have them.
He's been tested, there's nothing wrong with him that they can find. He reads well; he's read Lord of the Rings several times, the whole series, and loves Harry Potter. He's even a moderately good chess player. But any sort of complex ideas just seem to go right over his head, especially in math or science.
I'm very worried that he's at a very high risk of not completing high school. I try not to force my methods on my wife's kids, but I am of the opinion that he simply watches too much TV, plays too many video games, doesn't pay attention in school and doesn't study. But I will be the first to admit that I don't understand scholastic problems at all.
We're going to adopt at least some of my ideas, that is, limit TV to an hour and video games to an hour, and supervised homework. We don't have the money to send him to special tutors or anything like that.
Suggestions are welcome, with thanks.
TruthSeeker
Posts: 420
Joined: Wed Jun 09, 2004 9:53 pm
Location: On a comfy chair with a sleeping cat
How does he explain his difficulties? Is he bored, not interested, working hard, frustrated?
Sundog
Posts: 2576
Joined: Mon Jun 07, 2004 4:27 pm
TruthSeeker wrote:How does he explain his difficulties? Is he bored, not interested, working hard, frustrated?
Well, that's a good question. His mom deals with him mostly, he's sort of intimidated by me, though I try my best to connect.
When his mom asks him what the problem is he kind of just shrugs his shoulders and is uncommunicative. I think frustrated is probably closest to the truth.
Cloverlief
Posts: 5025
Joined: Tue Jan 08, 2008 10:59 pm
Location: Here, there or somewhere
Number 1: Fight like mad to have that child not held back. A good portion of kids who are held back do not graduate high school and do have self-esteem and embarrassment issues.
Number 2: Hire a good tutor or find a learning center and start this child in immediately with tutoring, don't wait until the first report card and you see that he has low grades, do it from day one. Have the tutor come to your house and work with the child on the troubled areas. Call your local community college or university, which often have certified tutors available - it will cost you about $20 an hour depending on the tutor, but it is worth it in the long run.
Number 3: Kids learn at their own pace, try to understand that and don't criticize him for it or hold up the other kids as better such as, "Soandso gets A's in math, why can't you be like Soandso?!"
Chani
Sundog
Posts: 2576
Joined: Mon Jun 07, 2004 4:27 pm
I appreciate the advice, but...
Chanileslie wrote:Number 1: Fight like mad to have that child not held back. A good portion of kids who are held back do not graduate high school and do have self-esteem and embarrassment issues.
Too late, and I really don't agree. He already has embarrassment issues arising from the fact that he can't do the work his classmates do. I feel it's essential that he master the things he's already supposed to know. I think he should have been held back years ago, maybe more than once. His esteem and embarrassment issues are the least of his problems.
Number 2: Hire a good tutor or find a learning center and start this child in immediately with tutoring, don't wait until the first report card and you see that he has low grades, do it from day one. Have the tutor come to your house and work with the child on the troubled areas. Call your local community college or university, which often have certified tutors available - it will cost you about $20 an hour depending on the tutor, but it is worth it in the long run.
Not an option, as I explained, and way too late for that. His grades have been atrocious for years.
Number 3: Kids learn at their own pace, try to understand that and don't criticize him for it or hold up the other kids as better such as, "Soandso gets A's in math, why can't you be like Soandso?!"
We don't do that. He has enough pressure just living in a house full of geniuses.
Learning at your own pace is fine and dandy, but there comes a time when that pace is just going to have to accelerate or he isn't going to make it.
Cloverlief
Posts: 5025
Joined: Tue Jan 08, 2008 10:59 pm
Location: Here, there or somewhere
Yep, might as well give up now because it sounds like you have already abandoned the kid. I feel for him.
Chani
Sundog
Posts: 2576
Joined: Mon Jun 07, 2004 4:27 pm
Chanileslie wrote:Yep, might as well give up now because it sounds like you have already abandoned the kid. I feel for him.
Thanks a lot. Way to make generalizations about a situation you know a few paragraphs about, and insult a pair of parents who are exemplary in every way.
If that's the quality of advice I can expect from you, please keep it to yourself in the future.
I don't know why every person who is employed by a school seems to think they are an expert in how to educate children.
Generalisimo
Posts: 751
Joined: Wed Jun 09, 2004 12:01 pm
Location: over the numbers
Have you offered to assist him with his homework? You say you've mostly kept your distance. If you want him to graduate, you really should work on bridging that gap during the summer vacation.
Does it seem like he does fine in subjects he is motivated in? Or is he an all-around bad student? How bad are we talking here, anyway? Does he usually score in the 60s? In the 40s?
Talk to his teachers and see what their assessment of him is. They might not be right, but they are another point of view. Do his friends get good grades? Is he hanging out with a crowd that doesn't care about school?
Or, simply ask him. Ask him why he doesn't complete his homework or do well on tests. Ask him what he thinks of school, which subjects he likes and doesn't like, and whether he understands the significance of his being held back a grade.
Communication is key. If you can't talk to him, you're not going to help him. If he's still intimidated by you, ask his mom to pose those questions.
Either way, it's best to get the scoop right from him. If he won't talk to his parents, try to enlist the help of a trusted friend, teacher, bus driver, etc. Someone that he'll open up to.
"It's rude to talk about religion, you never know who you're gonna offend."
Sundog
Posts: 2576
Joined: Mon Jun 07, 2004 4:27 pm
Pardon me, but I feel the need to elaborate after such a nasty comment from Chani.
There is no money in the house for $20 an hour tutoring. We just last year got the state to force the kid's deadbeat dad to pay child support. I've been raising her kids and mine, five in all, for over a decade on my salary alone. If you have the money to throw $20 an hour at the problem, good for you. Don't throw your hands in the air and say obviously we've abandoned the kid simply because we can't afford tutoring.
The only other thing I can figure that you're being so judgemental about is my statement that his grades have been atrocious for years. Do you simply assume we've tried nothing? My wife has tried to work with him until she's been reduced to tears. She doesn't know what else to do to help. You try to explain things to the kid and he just stares at you.
The only other point you raised, I simply disagree with. I am an educator too; your opinion on the subject is no better than mine.
No need to respond, and I don't want to argue with you anyway; I like your hubby. But think a little bit next time before spouting your half-baked assessments of a total stranger's situation.
Last edited by Sundog on Fri Jul 16, 2004 7:11 pm, edited 2 times in total.
TruthSeeker
Posts: 420
Joined: Wed Jun 09, 2004 9:53 pm
Location: On a comfy chair with a sleeping cat
Frustrated is good as it implies some motivation.
I agree with Generalisimo that he needs to talk. Maybe this isn't about school but something else like depression? An adult friend/relative that you trust might be ideal.
I do think he needs specialized one-on-one educational attention, as well. I wonder if there are options that would fit your budget. I know that here the public libraries and some of the community centres offer free tutoring. Does he have an older sib or cousin who could do it? Expert would be better in some ways, of course, but maybe just having an older role model he wants to impress might be enough to light his fire.
Good luck
Sundog
Posts: 2576
Joined: Mon Jun 07, 2004 4:27 pm
Generalisimo wrote:Have you offered to assist him with his homework? You say you've mostly kept your distance. If you want him to graduate, you really should work on bridging that gap during the summer vacation.
My wife works with him. He's too intimidated to listen to me very well. I don't really understand why but it's true.
He's been in summer school every summer for years. This year they wouldn't enroll him in it because his grades were too bad; he simply has to retake the whole year.
Does it seem like he does fine in subjects he is motivated in? Or is he an all-around bad student? How bad are we talking here, anyway? Does he usually score in the 60s? In the 40s?
That's what I don't get. He isn't just dumb! He consistently failed 4 classes this year but gets B's in the others.
Talk to his teachers and see what their assessment of him is. They might not be right, but they are another point of view. Do his friends get good grades? Is he hanging out with a crowd that doesn't care about school?
We have. Half of them want him tested (we did, he's fine). The others say he just doesn't pay attention and doesn't hand in his work.
Or, simply ask him. Ask him why he doesn't complete his homework or do well on tests. Ask him what he thinks of school, which subjects he likes and doesn't like, and whether he understands the significance of his being held back a grade.
Communication is key. If you can't talk to him, you're not going to help him. If he's still intimidated by you, ask his mom to pose those questions.
Either way, it's best to get the scoop right from him. If he won't talk to his parents, try to enlist the help of a trusted friend, teacher, bus driver, etc. Someone that he'll open up to.
That's a good idea. I have mostly left this up to his mom because she's much better at communicating with him than I am, and I'm not particularly patient.
Last edited by Sundog on Fri Jul 16, 2004 7:10 pm, edited 3 times in total.
Lisa Simpson
Posts: 185
Joined: Wed Jun 23, 2004 3:04 am
Location: Irk
If it's just that he's not getting math concepts well, maybe he needs someone to explain them better to him. My sis-in-law 'gets' math concepts easily. But she can't explain those concepts well at all. She just looks at a problem and knows the answer. Her daughter (my niece) doesn't understand math well either. But we've learned that I can explain math to her in a way she understands. Maybe that's what your son needs, too. Someone (a teacher, maybe) who can explain the concepts he should have learned years ago, in a way he can grasp. All kids learn in different ways.
Cutting down on the TV and video games won't hurt either. I had to cut them out entirely for my oldest. He just "forgets" to do his homework if the computer is around.
Generalisimo
Posts: 751
Joined: Wed Jun 09, 2004 12:01 pm
Location: over the numbers
Sundog wrote:There is no money in the house for \$20-an-hour tutoring.
I know this was not addressed to me, and you probably already know this so I apologize if I'm stating the obvious, but...
You might want to look around for a free tutor. Ask his teachers if they have an older high school student that would be willing to sit with him for even an hour or two a week. Or try a local college. Heck, you can probably get a college kid to help him out for a couple hours in exchange for a nice home-cooked meal.
Even the teacher might be willing to help. If he has any really good teachers, I'd bet they would stay late and help him for free, if they saw potential and that he was trying. It's that drive to help kids succeed that makes good teachers, and I've known of more than one that has worked on their own time for a kid that was worth it.
Just some thoughts, some areas to look for free help. I am not an educator, so feel free to tell me that all these ideas wouldn't work.
[size=75][i]"It's rude to talk about religion, you never know who you're gonna offend."[/i][/size]
TruthSeeker
Posts: 420
Joined: Wed Jun 09, 2004 9:53 pm
Location: On a comfy chair with a sleeping cat
Sundog wrote: You try to explain things to the kid and he just stares at you.
This is an expensive suggestion, but your sentence is provocative: Has he been tested for learning disabilities (like information processing problems) by a licensed neuropsychologist?
Sometimes, you can get reduced rates by having a trainee at a teaching hospital do the testing (they report to a licensed person so the interpretation is often even more careful as you have two brains looking at the data) or you can get free testing by enrolling in a research study that includes testing.
I have no idea if any of these resources are available to you. I'm just brainstorming.
Sundog
Posts: 2576
Joined: Mon Jun 07, 2004 4:27 pm
TruthSeeker wrote:Frustrated is good as it implies some motivation.
I agree with Generalisimo that he needs to talk. Maybe this isn't about school but something else like depression? An adult friend/relative that you trust might be ideal.
I do think he needs specialized one-on-one educational attention, as well. I wonder if there are options that would fit your budget. I know that here the public libraries and some of the community centres offer free tutoring. Does he have an older sib or cousin who could do it? Expert would be better in some ways, of course, but maybe just having an older role model he wants to impress might be enough to light his fire.
Good luck
I think you two have hit on it, and I know just the person. He has a good rapport with his priest, ironically enough, and people in his church. I think I'll try to enlist their aid.
I like this idea. I feel like the kid just won't talk about what's wrong.
TruthSeeker
Posts: 420
Joined: Wed Jun 09, 2004 9:53 pm
Location: On a comfy chair with a sleeping cat
Sundog wrote:
I think you two have hit on it, and I know just the person. He has a good rapport with his priest, ironically enough, and people in his church. I think I'll try to enlist their aid.
I like this idea. I feel like the kid just won't talk about what's wrong.
Excellent!
Keep us posted.
Sundog
Posts: 2576
Joined: Mon Jun 07, 2004 4:27 pm
TruthSeeker wrote:
Sundog wrote: You try to explain things to the kid and he just stares at you.
This is an expensive suggestion, but your sentence is provocative: Has he been tested for learning disabilities (like information processing problems) by a licensed neuropsychologist?
Sometimes, you can get reduced rates by having a trainee at a teaching hospital do the testing (they report to a licensed person so the interpretation is often even more careful as you have two brains looking at the data) or you can get free testing by enrolling in a research study that includes testing.
I have no idea if any of these resources are available to you. I'm just brainstorming.
He's been "tested" for learning disabilities, but now you have me curious. I'm going to find out just exactly what he has and hasn't been tested for.
Generalisimo
Posts: 751
Joined: Wed Jun 09, 2004 12:01 pm
Location: over the numbers
Sundog wrote:My wife works with him. He's too intimidated to listen to me very well. I don't really understand why but it's true.
Try to understand why. Maybe talk to his friends on the sly, or his friends' parents even. Maybe there is a part of your personality that keeps him distant, and you're totally oblivious to it.
Sundog wrote:That's what I don't get. He isn't just dumb! He consistently failed 4 classes this year but gets B's in the others.
Ah, so he can do well, but there are subjects he doesn't do well in, for whatever reason. Either he doesn't care, or he just doesn't get it. Perhaps he has a learning disability that only inhibits certain types of learning (e.g., math and problem solving, or reading comprehension). Or, as others have suggested, someone needs to take a different approach with him.
I took Calculus in high school. I did well, As and Bs on all my assignments. Got to college, and I struggled to comprehend what my idiot professor was saying. If I hadn't saved my notes from high school, I wouldn't have been able to pull the grades I did in Calculus I and II.
Then came Calculus III. The professor was decent, and tried to help me, but there were concepts I just couldn't get my arms around. I don't know why, and I had friends tell me that Calc III was easier than I and II. Not to me, it wasn't!
A year later, I took two Calculus-based Physics courses, and did very well in them. The professor had an uncanny knack for explaining concepts in ways I could understand. He was my favorite science teacher ever.
Anyway, the point of my babbling is this: sometimes, the same subject material taught a different way can make a world of difference.
Sundog wrote:That's a good idea. I have mostly left this up to his mom because she's much better at communicating with him than I am, and I'm not particularly patient.
Perhaps it is the impatience that keeps him distant. He doesn't think you'll hear him out, so he doesn't bother.
It's funny who a teenager will open up to, and who they won't. My volunteer work has me hanging around teens quite a bit. I'll dispense advice, and they'll take it. Later, I'll talk to their parents, and they'll be like "I've been telling him that for months! You tell him once and he does it." -- I usually reply with "of course, you're his parents, he's not supposed to listen to you!"
Developing a relationship with him is primary. But it wouldn't hurt to also develop relationships with those he opens up to.
(I see after I wrote this post, you've found someone you might be able to talk to. Good deal.)
[size=75][i]"It's rude to talk about religion, you never know who you're gonna offend."[/i][/size]
Viking Chick
Posts: 323
Joined: Sun Jun 20, 2004 7:49 am
I can't add anything to what has been said already - some excellent suggestions.
I just want to say that I applaud the way that you and your wife are exploring every avenue open to you, including looking to others for help. It seems to me that you have anything but given up on the kid, and that more kids could do with having the support that he obviously has. Some things are plain and simply worth more than any amount of money.
I hope it all works out well for you.
Sundog
Posts: 2576
Joined: Mon Jun 07, 2004 4:27 pm
Viking Chick wrote:I can't add anything to what has been said already - some excellent suggestions.
I just want to say that I applaud the way that you and your wife are exploring every avenue open to you, including looking to others for help. It seems to me that you have anything but given up on the kid, and that more kids could do with having the support that he obviously has. Some things are plain and simply worth more than any amount of money.
I hope it all works out well for you.
Thank you very much, and thank all of you. This is terra incognita to me. I appreciate the suggestions and I feel that some will actually help.
# What is the square root of 12 times the square root of 6?
$6 \sqrt{2}$

Multiplying square roots is just like normal algebraic multiplication, using the rule $\sqrt{a} \times \sqrt{b} = \sqrt{ab}$. So we can just multiply $\sqrt{12}$ and $\sqrt{6}$ together:

$\sqrt{12} \times \sqrt{6} = \sqrt{12 \times 6} = \sqrt{72} = \sqrt{36 \times 2} = 6 \sqrt{2}$

So the answer is $6 \sqrt{2}$.
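The rule $\sqrt{a} \times \sqrt{b} = \sqrt{ab}$ is easy to sanity-check numerically; here is a quick Python sketch (not part of the original answer):

```python
import math

lhs = math.sqrt(12) * math.sqrt(6)   # sqrt(12) * sqrt(6)
rhs = math.sqrt(12 * 6)              # sqrt(72)

print(abs(lhs - rhs) < 1e-12)                # True: the product rule holds
print(abs(lhs - 6 * math.sqrt(2)) < 1e-12)   # True: sqrt(72) = 6*sqrt(2)
```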
$$\require{cancel}$$
# 4: Einstein Relativity
In the 19th century it was discovered that the Maxwell Equations describing electric and magnetic fields, a grand synthesis of the results of many different experiments, are, unlike Newton's laws of motion, not consistent with Galilean relativity. A priori, the solution was not clear. One possible reason for this inconsistency, taken seriously at the time, was that the principle of relativity is wrong; i.e., there actually is an absolute rest frame, and our motion with respect to it could be detected with the appropriate experiment. Indeed, there was a significant experimental program to detect our motion with respect to absolute rest defined by a medium called "the ether."
Another possibility is that, while the principle of relativity holds, its specific implementation as Galilean relativity does not. As you know, because you have studied special relativity, this is indeed the correct solution to the puzzle of the Maxwell Equations' lack of invariance under a Galilean transformation.
It turns out that the "Galilean Boost" can be generalized to a "Lorentz Boost" that is also consistent with the principle of relativity. The primed and unprimed coordinate systems constructed as before, under a Lorentz boost are related as:
\begin{aligned} t' & = \gamma (t-vx/c^2) \\ x' & = \gamma (x - vt), \\ y' & = y,\ {\rm and} \\ z' & = z. \end{aligned}
where $$\gamma \equiv 1/\sqrt{1-v^2/c^2}$$. In the limit that $$c \rightarrow \infty$$ this reduces to the Galilean boost. As can be easily shown (see the homework problem) the reverse transformation is the same rule with $$v \rightarrow -v$$. Most importantly, the Maxwell equations are invariant under this transformation.
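One can also verify numerically that composing a boost with the reverse boost (the same rule with $$v \rightarrow -v$$) returns the original coordinates. The following small Python sketch is illustrative only; the event coordinates and boost speed are arbitrary choices, not from the text:

```python
import math

C = 3.0e8  # speed of light in m/s (illustrative value)

def boost(t, x, v):
    """Lorentz boost along +x with speed v: returns (t', x')."""
    g = 1.0 / math.sqrt(1.0 - v**2 / C**2)
    return g * (t - v * x / C**2), g * (x - v * t)

t, x = 2.0, 1.0e9           # an arbitrary event
v = 0.6 * C
tp, xp = boost(t, x, v)     # forward boost
t2, x2 = boost(tp, xp, -v)  # reverse boost: same rule with v -> -v

print(abs(t2 - t) < 1e-6, abs(x2 - x) < 1e-3)  # True True
```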
One of the more spectacular consequences of the Maxwell Equations is that one of their solutions is waves traveling at the speed of light. If the Maxwell equations are correct in all inertial frames, then this implies that these waves will be moving at the speed of light in all inertial frames. To your Galilean intuition this is quite startling as it violates the simple rule for addition of velocities you derived in the previous chapter.
The result can be easily demonstrated from the Lorentz transformation above. Here we sketch out the process, and you can fill in the details by performing the exercise that follows. Imagine a particle traveling at the speed of light. Let's parameterize its path through spacetime with the independent variable $$\lambda$$ so that $$t = \lambda$$ and $$x(\lambda) = c\lambda$$. Then we have (by direct substitution into the Lorentz transformation) that $$t' = (\gamma/c)(c-v)\lambda$$ and $$x'=\gamma (c-v)\lambda$$. The speed of this particle in the primed frame is
$\frac{dx'}{dt'} = \frac{dx'}{d\lambda}\frac{d\lambda}{dt'} = \frac{dx'}{d\lambda}\left(\frac{dt'}{d\lambda}\right)^{-1} = c.$
Thus we see the Lorentz transformation tells us that a particle traveling at speed $$c$$ in one frame will be traveling at speed $$c$$ in another. This result is consistent with our claim that the Maxwell equations are invariant under the Lorentz transformation, since a consequence of the Maxwell Equations is that electromagnetic waves travel at speed $$c$$.
Box $$\PageIndex{1}$$
Exercise 4.1.1: Fill in the steps in the above derivation.
\begin{equation*} \begin{aligned} \frac{dx'}{d\lambda} &= \gamma(c - v), \; {\rm and} \\ \\ \frac{dt'}{d\lambda} &= \frac{\gamma}{c}(c -v) \end{aligned} \end{equation*}
Therefore,
\begin{equation*} \begin{aligned} \frac{dx'}{d\lambda}\left(\frac{dt'}{d\lambda}\right)^{-1} = \gamma(c - v)\Big(\frac{\gamma}{c}(c -v)\Big)^{-1} = c \end{aligned} \end{equation*}
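The same light-speed invariance can be spot-checked numerically; a small Python sketch follows (the boost speed is an arbitrary illustrative choice):

```python
import math

c = 3.0e8                  # speed of light (m/s)
v = 0.8 * c                # boost speed (arbitrary choice)
g = 1.0 / math.sqrt(1.0 - v**2 / c**2)

# Two points on a light pulse's path: t = lam, x = c*lam.
events = [(lam, c * lam) for lam in (0.0, 1.0)]

# Lorentz-transform both events into the primed frame.
primed = [(g * (t - v * x / c**2), g * (x - v * t)) for t, x in events]

dt_p = primed[1][0] - primed[0][0]
dx_p = primed[1][1] - primed[0][1]
speed = dx_p / dt_p
print(speed / c)  # ≈ 1.0: the pulse also moves at speed c in the primed frame
```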
Unlike rotational coordinate transformations that preserve spatial distances between pairs of points, a Lorentz transformation does not. The spatial separation between $$(x,t)$$ and $$(x+dx,t)$$ is $$dx$$. The spatial separation between these points in the prime frame is $$\gamma dx$$, as one can see from the transformation rule. How can length depend on reference frame? Key to resolving this apparent paradox is the fact that in the primed frame the two events are not simultaneous. We won't go through sorting out these apparent paradoxes here.
We will, however, introduce a quantity that, unlike spatial length, is invariant under Lorentz transformations. For Cartesian spatial coordinates, the square of the invariant distance between event $$(t,x,y,z)$$ and event $$(t+dt,x+dx, y+dy, z+dz)$$ is given by
$ds^2 = -c^2 dt^2 + dx^2 + dy^2 + dz^2. \label{eqn:invdist}$
This quantity has the following two-part physical interpretation:
1. For $$ds^2 > 0$$, $$\sqrt{ds^2}$$ is the length of a ruler that connects the two events and is at rest in the frame in which the two events are simultaneous.
2. For $$ds^2 < 0$$, $$\sqrt{-ds^2}/c$$ is the time elapsed on a clock that moves between the two events with no acceleration.
Why is this quantity invariant under boosts? That's a deep question, and I'm not sure we have the fullest possible answer yet sorted out. We do know that the Maxwell equations are a synthesis from experiments, their form is invariant under a Lorentz transformation, and the Lorentz transformation preserves the invariant distance.
Box $$\PageIndex{2}$$
Exercise 4.2.1: Show that the invariant distance is indeed invariant under a Lorentz transformation. For specificity, take it to be the transformation appropriate for a boost in the $$+x$$ direction with speed $$v$$. For simplicity, take your two coordinate systems to be coincident at their origins (i.e. $$t=x=y=z=0$$ is the same point as $$t'=x'=y'=z'=0$$ ), use the origin as one point, and $$t=dt, x=dx, y=dy, z = dz$$ as the other.
So we start with $$ds^2 = -c^2dt^2 + dx^2 + dy^2 + dz^2$$, where, due to the Lorentz transformation, we have
\begin{equation*} \begin{aligned} dt & = \gamma (dt'-vdx'/c^2) = \gamma/c (cdt'-vdx'/c), \\ dx & = \gamma (dx' - vdt'), \\ dy & = dy',\; {\rm and} \\ dz & = dz' \end{aligned} \end{equation*}
Therefore,
\begin{equation*} \begin{aligned} ds^2 & = -\gamma^2\Big(cdt' - \frac{vdx'}{c}\Big)^2 + \gamma^2(dx' - vdt')^2 + dy'^2 + dz'^2 \\ \\ & = -\gamma^2(c^2 - v^2)dt'^2 + \gamma^2\Big(1 - \frac{v^2}{c^2}\Big)dx'^2 + dy'^2 + dz'^2 \\ \\ & = -\gamma^2 c^2\Big(1-\frac{v^2}{c^2}\Big)dt'^2 + \gamma^2\Big(1-\frac{v^2}{c^2}\Big)dx'^2 + dy'^2 + dz'^2 \\ \\ & = -c^2dt'^2 + dx'^2 + dy'^2 + dz'^2 =ds'^2 \end{aligned} \end{equation*}
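The result of the exercise can also be confirmed numerically for a particular interval; here is a short Python check (the interval components and boost speed are arbitrary illustrative values):

```python
import math

c = 3.0e8
v = 0.6 * c
g = 1.0 / math.sqrt(1.0 - v**2 / c**2)

dt, dx, dy, dz = 1.0e-6, 250.0, -40.0, 7.0   # an arbitrary interval

# Boost the coordinate differences along +x; transverse parts are unchanged.
dt_p = g * (dt - v * dx / c**2)
dx_p = g * (dx - v * dt)
dy_p, dz_p = dy, dz

ds2   = -c**2 * dt**2   + dx**2   + dy**2   + dz**2
ds2_p = -c**2 * dt_p**2 + dx_p**2 + dy_p**2 + dz_p**2
print(abs(ds2 - ds2_p) / abs(ds2) < 1e-9)  # True: ds^2 is frame-invariant
```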
Rather than the Lorentz transformation itself, the key thing to take away from this chapter is the definition of the invariant distance. We will be using it for the rest of the course, generalized to spacetimes with "curvature." Before doing so, we give some exercises here in which you get to make use of the invariant distance to solve problems in the more familiar context of a flat spacetime, the so-called Minkowski space you are familiar with from special relativity. A Minkowski space is simply a spacetime that can be labeled with $$t,x,y,z$$ such that the invariant distance is given by Eq. \ref{eqn:invdist}.
In Minkowski space, as one of the homework problems asks you to show, a finite (as opposed to infinitesimal) version of the invariant distance equation is also true:
$(\Delta s)^2 = -c^2 (\Delta t)^2 + (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2$
for trajectories that are straight lines, with $$\Delta s \equiv \int d\lambda \frac{ds}{d\lambda}$$ also invariant under Lorentz transformations.
Box $$\PageIndex{3}$$
Exercise 4.3.1: Calculate the time that elapses on a clock traveling in a straight line at speed $$v$$ from $$x_1,t_1$$ to $$x_2, t_2$$. Do so in the following manner: 1) Draw the clock's path in two coordinate systems: $$x$$ vs. $$t$$ and $$x'$$ vs. $$t'$$ where the prime system is the one where the clock is at rest. 2) Calculate $$(\Delta s)^2$$ along the path from point 1 to point 2 in both coordinate systems, set them equal, and solve for $$t_2'-t_1'$$.
Note that here we have used these facts: 1) the time that elapses on the clock will be equal to the difference in time coordinates in the frame in which it is at rest, and 2) the invariant distance is invariant (the same in both coordinate systems). We could also have just calculated $$(\Delta s)^2$$ in the unprimed frame and used our physical interpretation of $$\sqrt{-(\Delta s)^2}/c$$ (for $$(\Delta s)^2 < 0$$ ) as the time that elapses on a clock traveling from point 1 to point 2.
For the first coordinate system we have
\begin{equation*} \begin{aligned} (\Delta s)^2 = -c^2(t_2 - t_1)^2 + (x_2 - x_1)^2 = -c^2(\Delta t)^2 + (\Delta x)^2 \end{aligned} \end{equation*}
For the prime system we have
\begin{equation*} \begin{aligned} (\Delta s')^2 = -c^2(t'_2 - t'_1)^2 \end{aligned} \end{equation*}
Now set them equal and solve for $$t'_2 - t'_1$$
\begin{equation*} \begin{aligned} -c^2(t'_2 - t'_1)^2 & = -c^2(\Delta t)^2 + (\Delta x)^2 \\ \\ (t'_2 - t'_1)^2 & = (\Delta t)^2 - \frac{(\Delta x)^2}{c^2} \\ \\ {\rm note} \; & {\rm that,} \; \frac{(\Delta x)^2}{(\Delta t)^2} = v^2 \\ \\ (t'_2 - t'_1)^2 & = (\Delta t)^2\Big(1 - \frac{v^2}{c^2}\Big) \\ \\ t'_2 - t'_1 & = \gamma^{-1}\Delta t \end{aligned} \end{equation*}
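Keeping the factor of $$c$$ explicit, the boxed result $$t'_2 - t'_1 = \gamma^{-1}\Delta t$$ can be checked with concrete numbers; a small Python sketch (all values are illustrative choices):

```python
import math

c = 3.0e8
v = 0.6 * c                          # clock speed (arbitrary choice)
g = 1.0 / math.sqrt(1.0 - v**2 / c**2)

t1, t2 = 0.0, 10.0                   # coordinate times in the unprimed frame (s)
x1 = 0.0
x2 = x1 + v * (t2 - t1)              # straight-line path at speed v

ds2 = -c**2 * (t2 - t1)**2 + (x2 - x1)**2
elapsed = math.sqrt(-ds2) / c        # proper time read off the moving clock
print(elapsed, (t2 - t1) / g)        # both ≈ 8.0 s: elapsed = Δt / γ
```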
# HOMEWORK Problems
Problem $$\PageIndex{1}$$
Show, by solving for $$x$$ and $$t$$ that the inverse Lorentz transformation is the same as the forward transformation but with $$v \rightarrow -v$$. Explain what this has to do with the principle of relativity.
Problem $$\PageIndex{2}$$
Show that for straight paths in spacetime, that $$(\Delta s)^2 = -c^2 (\Delta t)^2 + (\Delta x)^2$$ follows from $$ds^2 = -c^2 dt^2 + dx^2$$. Hint: all straight paths in spacetime (at least the flat spacetime of special relativity we are studying now) can be parametrized via: $$t-t_0=\lambda, x=x_0 +v\lambda$$.
Problem $$\PageIndex{3}$$
Events A and B occur 10 meters apart in space and 100 ns apart in time in frame 1. If they occur 95 ns apart in time in frame 2, what must their spatial separation be in frame 2?
Problem $$\PageIndex{4}$$
Derive the phenomenon of time dilation. Consider the path taken by a clock from point 1 to point 2 in two different coordinate systems, a primed one in which the clock is at rest, and an unprimed one in which the clock is moving at constant speed $$v$$. Use the invariance of the invariant distance to show that the time elapsed on the clock is less than $$t_2 - t_1$$. [Yes, this is basically the same as one of the exercises.]
Math Help - Challenging Trigonometry Question
1. Challenging Trigonometry Question
Triangle ABC is such that AC=BC and AB/BC = r
Show that cos A + cos B + cos C = 1 + r -(r^2/2)
2. Originally Posted by Ph4m
Triangle ABC is such that AC=BC and AB/BC = r
Show that cos A + cos B + cos C = 1 + r -(r^2/2)
My proof is a bit long winded. i'm sure someone else will come up with a better one soon.
See the diagram below:
What was described is an isosceles triangle, so we can further assume that:
(1) AD = BD
(2) $\angle A = \angle B$
Now from the triangle, we see that:
$\sin A = \frac {CD}{BC} \implies CD = BC \sin A$
also, $\cos A = \frac {AD}{AC}$
also, $\cos \left( \frac {C}{2} \right) = \frac {CD}{BC} = \frac {BC \sin A}{BC} = \sin A$
Further more, you should know the identity, $\cos \left( \frac {C}{2} \right) = \sqrt { \frac {1 + \cos C}{2}}$
$\Rightarrow \sin A = \sqrt { \frac {1 + \cos C}{2}}$
$\Rightarrow \cos C = 2 \sin^2 A - 1 = - \cos 2A = 1 - 2 \cos^2 A$
Now to start piecing this together:
Since $\angle A = \angle B$, $\cos A = \cos B$
Also, since $AB = 2AD$, $r = \frac {AB}{BC} = \frac {2AD}{BC} = 2 \cos A$
$\Rightarrow \cos A = \frac {1}{2}r$
and now we're done. just to say what we have to say:
$\cos A + \cos B + \cos C = 2 \cos A + \left( 1 - 2 \cos^2 A \right)$
$= 2 \left( \frac {1}{2}r \right) + 1 - 2 \left( \frac {1}{2}r \right)^2$
$= r + 1 - \frac {r^2}{2}$ as desired
3. Originally Posted by Ph4m
Triangle ABC is such that AC=BC and AB/BC = r
Show that cos A + cos B + cos C = 1 + r -(r^2/2)
$\cos^2 A = \frac{AC^2 + AB^2 - BC^2}{2AB\cdot AC} = \frac{AC^2 + r^2AC^2 - AC^2}{2rAC\cdot AC} = \frac{r^2AC^2}{2rAC^2} = \frac{r}{2}$
$\cos^2 B = \frac{AB^2 + BC^2 - AC^2}{2AB\cdot BC} = \frac{r^2AC^2 + AC^2 - AC^2}{2AC\cdot rAC} = \frac{r}{2}$
$\cos^2 C = \frac{AC^2 + BC^2 - AB^2}{2AC\cdot BC} = \frac{AC^2 + AC^2 - r^2AC^2}{2AC^2} = \frac{1-r^2}{2}$
I do not have time to finish the details, sorry.
4. Originally Posted by ThePerfectHacker
$\cos^2 A = \frac{AC^2 + AB^2 - BC^2}{2AB\cdot AC} = \frac{AC^2 + r^2AC^2 - AC^2}{2rAC\cdot AC} = \frac{r^2AC^2}{2rAC^2} = \frac{r}{2}$
$\cos^2 B = \frac{AB^2 + BC^2 - AC^2}{2AB\cdot BC} = \frac{r^2AC^2 + AC^2 - AC^2}{2AC\cdot rAC} = \frac{r}{2}$
$\cos^2 C = \frac{AC^2 + BC^2 - AB^2}{2AC\cdot BC} = \frac{AC^2 + AC^2 - r^2AC^2}{2AC^2} = \frac{1-r^2}{2}$
I do not have time to finish the details, sorry.
where did those equations come from?
5. Originally Posted by Jhevon
where did those equations come from?
Law of Cosines.
6. Originally Posted by ThePerfectHacker
ah! ok, i guess i should have seen that
in that case, the cosines should not be squared (maybe that's what threw me off, the squares) and you should have:
$\cos C = \frac {2 - r^2}{2}$
then we would have:
$\cos A + \cos B + \cos C = \frac {r}{2} + \frac {r}{2} + \frac {2 - r^2}{2} = r + 1 - \frac {r^2}{2}$
Very nice TPH!
7. Hello, Ph4m
I'll finish what ThePerfectHacker started . . .
We have isosceles triangle ABC with $AC = BC$ and $AB/BC = r$
Then the triangle looks like this:
Code:
C
*
/ \
/ \
a/ \a
/ \
/ \
A * - - - - - * B
ar
The sides are: . $\begin{Bmatrix} a & = & a \\ b & = & a \\ c & = & ar\end{Bmatrix}$
From the Law of Cosines, we have:
. . $\cos A \:= \:\frac{b^2 + c^2 - a^2}{2bc} \:= \:\frac{a^2 + a^2r^2 - a^2}{2a\!\cdot\!ar} \:=\:\frac{1}{2}r$
. . $\cos B \:= \:\frac{a^2 + c^2 - b^2}{2ac} \:=\: \frac{a^2 + a^2r^2 - a^2}{2a\!\cdot\!ar} \:=\:\frac{1}{2}r$
. . $\cos C \:= \:\frac{a^2 + b^2 - c^2}{2ab} \:= \:\frac{a^2 + a^2 - a^2r^2}{2a\!\cdot\!a} \:=\: 1 - \frac{1}{2}r^2$
Therefore: . $\cos A + \cos B + \cos C \:=\:1 + r - \frac{1}{2}r^2$
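The identity can be spot-checked numerically. The sketch below (not from the thread; the value of r is an arbitrary choice) sets AC = BC = 1 so that AB = r, uses cos A = cos B = r/2 from the posts above, and compares both sides:

```python
import math

r = 0.75                       # any 0 < r < 2 gives a valid isosceles triangle
A = math.acos(r / 2)           # base angles satisfy cos A = cos B = r/2
B = A
C = math.pi - A - B            # the angles of a triangle sum to pi

lhs = math.cos(A) + math.cos(B) + math.cos(C)
rhs = 1 + r - r**2 / 2
print(abs(lhs - rhs) < 1e-12)  # True
```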
# Dosbox Config File
Use DOS/32A to prevent save. It lacks many of the hardware emulations of DOSBox (no joysticks, basic VGA). C: raceaid. I managed to compile entire source code (with just some minor changes, and one typo fix) and run the exec in the emulator (WINSCW). Now drag and drop the zip file you just downloaded from Abandonware DOS into the D-Fend window. 9ghz processor (which is what I use with very little hiccups in performance). rpm for Tumbleweed from openSUSE Oss repository. The games setup program is shown below. ONFIG:Loading primary settings from config file dosbox. Defaults for these options can also be set by creating a file named jupyter_notebook_config. 7Likes, Who? dreamer_21 Oct, 2019. (CVE-2019-7165 by Alexandre Bartel) Added a basic permission system so that a program running inside DOSBox can't access the contents of /proc (e. Where can I increase the files to 130. In order to utilize them, you can define which patches are provided by the DOSBox release in the Emulator Settings dialog. conf file from here to your ~/xcom directory; Modify that file as follows: Change [sdl] settings; fullresolution=original; windowresolution=original; Change [dosbox] setting to enable higher resolution; machine=svga_s3. DB Turbo focused on speed and win9x emulation… I wanted to see if I could take things in a different direction. I set the path for savefiles ect. conf file, will be in a location such as E:\Program Files (x86)\Origin Games\Theme Hospital\data\Game\DOSBox Open this file with a Text editor The file is self explanitory, but the lines you are looking for are about the fullscreen and resolution. This article has been revised for DOSBox 0. *If you want multiple personalised conf files read the dosbox readme to know how to proceed, but for most users the default config file will do fine*. (equivalent to "files=" in config. Works cool with Project Tempest emulator. * If loading exe/com/bat the system directory will be searched for a 'dosbox. 
"filelocation" is located on the local drive, not a mounted drive in DOSBox. Once it’s created, head to that directory and open it up. nGlide is a 3Dfx Voodoo Glide wrapper. You can edit the file and add the mount line at the very end in the autoexec area. conf has a section at the end where you can put DOS and DOSBox commands and they will be executed each time DOSBox starts. Click on the Local FIles tab and select "Browse Local Files". WikiExt monitors and provides timely. Running a Game. Where is the settings button?? (or How can I access the settings menu)? A. dosbox sudo nano dosbox-SVN. Choose soundblaster for sound, default settings (actually, I'd recommended choosing no sound, but that'd defeat the purpose of the CD version). Whatever commands we may need, we will put them in batch files and execute them (by typing the name of each batch file) after DOSBox starts. 8b which is a DOS based software in windows 7 and windows 8 on 32 and 64 bit systems. config=129:0 in your dosbox. The CONFIG file type is primarily associated with NET by Microsoft Corporation. The default configuration file was created, on my system, under ~/. THIS TUTORIAL IS USEFUL FOR ONLY THOSE RUNNING WINDOWS VISTA OR LATER. Step-by-step with images. h Go to the documentation of this file. of the DOSBox configuration file so that DOSBox will activate it at start. conf"”, and should point to the location of. sys file which are not supported in XP. Tracking time spent on tasks. dosbox, in your home folder. There, in 0. i select my printer, hit okay, nothing happens. In the Input Settings, after the manuall bind, use the Save autoconfig entry. conf file (the configuration file for DOSBox); it should be located in the DOSBox directory. BAT section, which if you remember your DOS days, allows you to specify the command(s) you wish to run when the system starts up. -Dosbox Turbos default directory path to the dosbox. Since most folk are using 7. Dosbox config? Resolved! 
:-) Im trying to play WC3,i downloaded and copied files to 4 cd's When i run from cd the installation (config) runs at the moment the sfx dont work only the music. The Nome geophysical survey is located in western Alaska in the Nome mining district, about 825 kilometers west of Fairbanks, Alaska and just north of Nome, Alaska. I was actually looking for a main. In MoM’s case, this required using the brilliant MS-DOS emulator, DOSBox. sys and autoexec. config file at runtime (C# VB. conf is a configuration file that DOSBox can use globally and/or locally per game (and settings that are left out are taken from the global file). You can also use variations of this to do some interesting file transfers, but it has some important limitations. you can download the DMG file from the below link and make sure that you have met all the basic system that requires to run the Mac OS X Mavericks on your Mac system without any hassles. There's a chance that D-Fend will. conf exists in your ~/. /dosbox directory. rpm for Tumbleweed from openSUSE Oss repository. 31 CP/M, use Tim Mann's mkdisk to reset the write protect attribute of the file, so you can use CONFIG to change the Drive Parameters and save the settings. conf settings. /proc/self/mem). I managed to compile entire source code (with just some minor changes, and one typo fix) and run the exec in the emulator (WINSCW). Java Config will verify that properties in such a file are valid, and provides a large number of useful methods to read a large variety of common data- types (from int s and boolean s to filesizes, durations and MessageFormat patterns). The DOSBox window will display a line "Cpu Cyles: max" at the top then. ” In Properties, select the Security tab. conf" (exactly the same name which the file had you renamed to dosbox_*****_old. Once you have your new dosbox. # Possible values: lowest, lower, normal, higher, highest, pause. map, which is located on the pi in the ~/. 
VMware ESXi (formerly ESX) is an enterprise-class, type-1 hypervisor developed by VMware for deploying and serving virtual computers. Download docker-machine-0. config/) location by setting the XDG_CONFIG_HOME environment variable, according to the XDG Specification. conf but you can see where it creates it on any version of Windows you run just by running DOSBox for the first time and looking at the DOSBox status window. As with most emulators ho. bat and the config. rpm for Mageia 7. You must not use autoexec commands in your dosbox. Instead add the "boot -l c" command under the mount command as shown. Windows 8 and Windows 10: Press the Windows key + Q, type in dosbox, and the options file DOSBox 0. rpm for Cooker from OpenMandriva Unsupported Release repository. This speeds up or slows down the virtual CPU of DOSBox. Warning: This part of Slax website is deprecated. DOSBox, in simple English, is a free program that emulates an X86 based DOS environment on your new computer including speaker sounds, video graphics and other hardware. for handling shortcuts. D-Fend Reloaded is a successor of D-Fend (now discontinued). conf file which can be used to adjust settings later. Type sudo gedit dosbox. Edit the DOSBox conf file used to launch the game (Ex. This would. They all need specific options to be set on or off. [dosbox]: memsizekb, memalias. If the mouse drivers are not loaded properly, the mouse will not work. conf, which is located in the hidden folder,. Rename this to server, or somesuch. sys file which are not supported in XP. C: raceaid. For a normal Dosbox configuration you can directly edit the main Dosbox. Use fullscreen=false for DOSBox in a window. 02 Thursday Feb 2017. It makes managing multiple DOSBox configuration files easy by offering a clean interface, shortcuts and a graphical launcher. The configuration file controls various settings of dosbox: The amount of emulated memory, the emulated soundcards and many more things. 
Appending the commands to mount the Puff directory, change into the Puff drive/directory, and run the Puff program to the dosbox. That's really all there is to creating and using custom configuration files. As you add more old DOS games to your PC, you will encounter some titles that need specific configuration options in order for them to run optimally (or at a. conf" and then press the "Enter" key. 2-dev Sound library for Simple DirectMedia Layer 1. 74 on a Win 7 platform? If so, where do I need to locate them? On my Windows C:\, on my DOSBOX c:\ or somewhere else? My config. The Steam releases are run using DOSBox 0. Open the file dosbox. 3 du 10/01/2008 Page 6/25. But you want to use dosbox turbo, to understand things, too. We no longer need to mount the boot disk each time, so that can be removed from the autoexec section of the config file. In a million years, I would never have imagined the version of dosbox I had originally was buiggered. The auotexec. Create shortcut to run command "dosbox. I am using dosbox 0. Wii Downloads Applications; Homebrew; Applications. At the moment, if you want to have a different. Save the the configuration file by selecting File >> Save. I suggest u make it on all DLL files! 3. conf files so, in most cases, there shouldn't be a need for a shell script. net: Prince of Persia. INI: (and here is shows in the [TCPIP] - section: "DisableDHCP" instead of the "Disable Automatic Configuration". # They are used to (briefly) document the effect of each option. conf, url to icons are included as yoshikun database , use and manipulate, edit. DOSBOX conf 파일 변경 - d드라이브를 가상 c드라이브로 인식 시켰다. It is user interface for game. - Serial mouse emulation. Find and open the dosbox. For reference, by default the DosBox. Game data Configuration file(s) location. DOSBOX conf 파일 변경 - d드라이브를 가상 c드라이브로 인식 시켰다. With DOSBox closed, find the dosbox-[version_number]. conf’ I keep my old DOS stuff in /users//msdos. 
Whenever you log out or reboot your computer, you will have to mount the directory again using the commands above, unless you automate the mount. DOSBox is an open-source tool and is lightweight to install; it further allows access to AUTOEXEC-style startup commands, and for ease of use several graphical front-ends have been developed by the user community. On Linux, type "dosbox" in a terminal to open it. DOSBox also includes network emulation that allows two or more people on the same network to play multiplayer games via UDP; look at Section 13, "The configuration (options) file", of the bundled documentation for the relevant settings. For serial hardware, note that there are several versions of the null-modem cable; the adapter converts RS232 to RS422, so make sure your cable has the adapter on the end, otherwise you will have issues. To give the emulator more memory, open the configuration file (in the hidden ".dosbox" directory on GNU/Linux systems) using your preferred text editor and increase the memsize value to 64, the maximum that DOSBox will allow; some game titles dislike a raised value, so keep this in mind. If a game was installed with an overlay, edit the conf file used to launch it and remove the autoexec command that mounts the overlay folder. There is also a highly optimized and fast DOSBox port for Android with a unique control system for playing anywhere without external hardware. Note that SDL1 and SDL2 builds of DOSBox-X cannot use the same mapper file, or they will likely malfunction. Once you've got DOSBox installed, run it and use the mount command: type "mount c c:\games\warcraft" (replacing c:\games\warcraft with wherever you installed the game), then change directories and run your DOS program. To run in a window, edit line 34 of dosbox-SVN.conf in the [sdl] section. The default configuration file was created, on my system, under ~/.dosbox; the path avoids spaces in the default filename and groups all configuration files in a single directory. Visit the development page to find out how to keep up to date with the latest improvements.
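Raising memsize as described can also be done non-interactively; this sketch assumes a throwaway conf file name and uses sed to rewrite the value in place:

```shell
#!/bin/sh
# Raise memsize to 64, the maximum mentioned in the text, in a sample
# conf file (the file name here is an example).
conf=dosbox-mem.conf
printf '[dosbox]\nmemsize=16\n' > "$conf"
sed -i.bak 's/^memsize=.*/memsize=64/' "$conf"
```

The `-i.bak` form keeps a backup of the original file and works with both GNU and BSD sed.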
Exit the mapper (and DOSBox) and the mapper file, mapper-SVN.map, will be saved. In the conf file, the comment above memsize briefly documents the amount of memory DOSBox emulates. DOSBox also supports so-called "booter" games, which boot from disk images rather than running a DOS executable. The emulators/dosbox port has been updated. A custom DOSBox controls file exists for The Elder Scrolls: Arena, enabling full modern-style controls while retaining the ability to type text as needed. I managed to compile the entire source code (with just some minor changes, and one typo fix) and run the executable in the WINSCW emulator. DOSBox displays the short 8.3 names (for example ABCDEF~1.EXT) for Windows files that have long filenames, but it cannot display the actual long filename for those files. Drop an archive onto D-Fend and it will automatically launch the Import archive file window. Finally, go to the very end of the config file and add "mount c ~/dos". Now you should have a working DOSBox install! Enjoy! Some good games to try out are Skyroads, Command & Conquer or Elder Scrolls I. One Mac front-end loses its Prefs file and doesn't recognise the 'conf' command when I try to remind the app where the config file is located.
As far as I know, without an [autoexec] section the only way to start a program is to type all of the commands in after opening DOSBox. (vDos, by contrast, is for serious, mainly text-mode, DOS applications and runs on Windows 32/64 bits, XP and later, only.) A PortableApps package bundles DOSBox 0.74 (including all currently available language files) and the FreeDOS command-line tools. On Ubuntu: Step 2: type "sudo apt-get install dosbox". Step 3: when it is installed, download Turbo C, extract it, and paste it in your home directory. Step 4: type "dosbox" in a terminal to open it. You can define the DOSBox config file on the command line with "dosbox -conf filename". So if you want one setup to work with both Mac and non-Mac systems, copy the existing config file to a new name, change the usescancodes parameter in the copy, and use the -conf command-line directive to load that file when on a Mac. Each game can have its own conf this way, and changes to the main dosbox.conf no longer affect them. To edit the default file, for version 0.74 run: nano ~/.dosbox/dosbox-0.74.conf (please use the latest version of DOSBox). Lines starting with a # are comment lines and are ignored by DOSBox. You can edit the file and add the mount line at the very end, in the autoexec area.
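A hedged sketch of the per-platform -conf trick: write a small override conf containing only the scancode setting, then launch DOSBox with it. The launch command is only echoed here, since dosbox may not be installed on the machine running the script:

```shell
#!/bin/sh
# Per-platform override conf: only the keyboard scancode setting differs;
# everything else falls back to the main configuration. The launch line
# is echoed rather than run, since dosbox may not be installed here.
cat > dosbox-mac.conf <<'EOF'
[sdl]
usescancodes=false
EOF
echo "dosbox -conf dosbox-mac.conf"
```

On the non-Mac machines you simply keep launching DOSBox without the -conf flag, so the unmodified default file is used there.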
You can also type HELP in DOSBox itself to get a refresher on those good old DOS commands. On Android, DosBox Manager serves as a game/profile manager (similar to D-Fend Reloaded) add-on for DosBox Turbo, which it requires. To unpack the Windows build, click on the DOSBox setup file, then click "Extract". Petit dosbox is designed for Mac OS X systems, features a nice and easy-to-use interface, and has many configuration options, including a Game Manager that lets games be launched straight from the program interface. DOSBox itself is available for a variety of platforms, including Windows, Linux, Mac and others. If you want to run store-installed games with your own copy of DOSBox, just treat the installed files the same way you would any other DOS game you installed manually, creating new shortcuts for them and ensuring the path values (both for the program and the "start in" folder) reference your main DOSBox installation. On Linux desktops you can use a .desktop file to run programs via their own config files. To generate a configuration file, go to the command prompt inside DOSBox and type "config -writeconf dosbox.conf", then press Enter. Type "set" at the DOSBox prompt to see your SoundBlaster configuration (IRQ, DMA, etc.). For X-COM: copy the dosbox.conf file to your ~/xcom directory and modify it as follows. In the [sdl] settings, set fullresolution=original and windowresolution=original; in the [dosbox] settings, set machine=svga_s3 to enable higher resolution.
Type "dm2" and press enter. conf As you have previously renamed the folder, it is here: E:\goggames\Secret Agent\DOSBOX\dosbox. There's a chance that D-Fend will. As it is a 16 bit program I need dosbox for it to run on these systems. I managed to compile entire source code (with just some minor changes, and one typo fix) and run the exec in the emulator (WINSCW). conf, which is located in the hidden folder,. It’s full of tips and tricks. config file at runtime (C# VB. Upon installing Microsoft Office 4. NET code) Hey , its a great topic but you will face with a problem when you don't have write permission on that config file If you work in a corporate company and you don't have any permission to modify files under IIS , you should find an another way. So it´s propably done for me. It tries to make creating DOSBox configuration files a little easier by offering a simple interface, some shortcuts and a little bit of intelligence. Here is the link to that wiki page - http://www. You can define the dosbox config file on the command line - dosbox -conf filename - So if you want to be able to use it with both mac and non-mac systems you could copy the existing config file to a new name and change the above usecancodes parameter and then use the -conf command line directive to use that file when using a mac. This folder will be mounted as C: Drive in DOSBox so the EXE, COM, or BAT files can be executed. conf As you have previously renamed the folder, it is here: E:\goggames\Secret Agent\DOSBOX\dosbox. Pre-Configured Arcade set-up to have you quickly up and running with the latest Hyperspin, Rom Sets, Emulators, Front-End Media and more. command 0. On Windows click the edit configuration file shortcut. 51 & Dosbox 0. DOSBox will treat the DOS 6. Dosbox for Fedora (64-bit) Free. So Notepad helpfully (*snicker*) tacks on a. This speeds up or slows down the virtual CPU of DOSBox. As with most emulators ho. 
If the mouse drivers are not loaded properly, the mouse will not work. In Boxer, extra settings can be added to a game's own configuration file if needed: show the contents of the game's gamebox in Finder and edit the file inside. On Linux with Steam, telling Steam to open a DOS game with Boxtron will make the game automagically Just Work™ (assuming you have dosbox installed). In DosBox Turbo, click "DosBox Settings" and then "Autoexec" if you want to put in the mount commands (mount lines). On Windows, click the "edit configuration file" shortcut; it should open a Notepad window with the config file in it. The file that you've downloaded and installed also carries the 'edit' executable. By default, the conf file is created in the Windows user profile folder, for example C:\Users\Your_Name\AppData\Local\DOSBox\dosbox-0.74.conf; if you are on Vista or later it must be in that directory. Magic Dosbox now supports two locations for settings, private and public. If loading an exe/com/bat directly, the system directory will be searched for a 'dosbox.conf' file.
Importing games into D-Fend can be done using drag 'n' drop or going through the File menu; in the bottom pane you will be able to preview game screenshots and videos, add notes, and more. Many of the older DOS programs required particular Autoexec.bat and Config.sys settings, which is why per-game configuration files matter. Scroll to the bottom of the conf file and look for the section headed [autoexec]; this is where you're going to add the instruction to auto-mount the C:\dosgames directory each time DOSBox launches. DOSBox requires massive CPU use for proper emulation, so I cannot recommend any of the settings below if you run anything much slower than about a 1 GHz machine. Place the config file in the DOSBox directory, then create a shortcut for DOSBox; DO NOT put it in Program Files or user directories. Within DOS-era setup screens, use the arrow keys up and down and Enter to change an option. To find the bundled conf, go to the DOSBox installation directory (under Program Files (x86) on a typical machine).
You can choose other config files with the corresponding -conf flag, so make sure to configure the right dosbox.conf for each game. Whatever commands we may need, we will put them in batch files and execute them (by typing the name of each batch file) after DOSBox starts. To import a game, drag and drop the zip file into D-Fend. Step 5: create a configuration file by typing the config command inside DOSBox. You can use a custom config file for each game that you set up a DOSBox shortcut for. In DOSBox 0.73 and earlier, you will select "configuration" and then "Edit configuration" from the Start-menu group. I use vDos and DosBox side by side on a notebook. When you open the mapper file in an editor, you can find the code that emulates the Gravis gamepad at lines 119 to 126, which appear after the main keyboard section. To navigate in the file manager, click on the path at the top of the window (the bar at the very top that shows the path or address), delete anything in this box, and enter the target path (without quotation marks).
DOSBox offers advanced users a variety of config settings to tweak, but we're going to keep it simple and show you the bare minimum you need to get Duke Nukem 3D working on a Windows 7 machine. First you need to create a folder to MOUNT as your C: drive and hold the game files; this guide will use the 0.74 release. Because the key mapper keeps one .map file per game, you have to create a new DOSBox configuration for each game and set the mapper file in it (on the Pi, the mapper file is located in the ~/.dosbox directory). For printing: open the Windows Command Prompt (not DOSBox), move to the Printfil install folder with CD \"Program Files"\Printfil or CD \"Program Files (x86)"\Printfil, then type the DOSBox configuration command "Printfil DOSBox LPT1". Printfil detects vDosPlus and autoconfigures it; now open vDosPlus, start a DOS program, and print to LPT1:.
On Windows 7, the "DOSBox 0.74 Options" entry should appear in the Start-menu search results; click it to open the options file. In a sense, D-Fend Reloaded is a powerful DOSBox-oriented explorer. It's just a proof of concept, as I believe there are many possible ways to make running DOS games easier (with their own settings). In other words, the C: drive in DOSBox is completely separate from the C: drive on your computer. Once your joypad has been configured manually in the Input Settings, you can generate an autoconfig file with the "Save autoconfig" entry. For a normal DOSBox configuration you can directly edit the main dosbox.conf; once games have their own conf files, that won't affect them any more. Disk images can be mounted as well, most commonly CD-ROM images. When installing the game under DOSBox, the guide provided by Bethesda recommends SoundBlaster 16 for the music and sound emulation. To start in fullscreen, edit the configuration file of DOSBox and change the option fullscreen=false to fullscreen=true; the commands present in the [autoexec] section are run when DOSBox starts, so you can use that section for the mounting. To keep the original, rename the bundled file to something like E:\goggames\Secret Agent\dosbox_secret_agent_old.conf and create a new "dosbox.conf" (exactly the same name the file had before you renamed it). Templates are supported too.
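Flipping the fullscreen option as described can be scripted rather than done by hand in an editor; the conf file name here is an example:

```shell
#!/bin/sh
# Flip fullscreen=false to fullscreen=true, as described in the text.
# The conf name is an example; point it at your real file instead.
conf=dosbox-fs.conf
printf '[sdl]\nfullscreen=false\n' > "$conf"
sed -i.bak 's/^fullscreen=false/fullscreen=true/' "$conf"
```

Running the same substitution in reverse gets you back to windowed mode, which matches the in-emulator Alt+Enter toggle.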
Inside DOSBox, the CONFIG command manages settings at runtime: CONFIG -set "section property=value" changes a setting, CONFIG -get "section property" reads one, and CONFIG -writeconf localfile writes the current configuration settings to a file. DOSBox emulates a PC with DOS as its operating system, and the speed of the emulation can reach that of typical PCs from the early and mid '90s, depending on the machine that is running DOSBox. Download and install DOSBox accepting all default options, just keep hitting Next; once this is done, close DOSBox and copy the entire folder to a new folder if you want a second install. Printfil makes an external file from the print command, and can also be set to pick the file up immediately and print it. In DOSBox-X, the "con device use int 16h to detect keyboard input" option does what it says: if set, the console device uses INT 16h to detect keyboard input. vDos, by contrast, lacks many of the hardware emulations of DOSBox (no joysticks, basic VGA). I run an old Windows setup under emulation to allow my old man access to some old files he'd created from the mid 1990s in Word and Excel. DosBox Turbo typically handles the first bit of the autoexec file for you, mounting the SD card. The cpu core can usually be left at core=auto. Here is @belek666's proof-of-concept build of DOSBox for the PS2, built from SVN source. Unlike a word processor, Notepad will not add formatting and will not rename the file. The DOSBox configuration file parameters below control advanced features of the DOS emulation. It is important to take note of which conf file the game is using for its main config.
Once you've created the new configuration file, you'll probably want to create a copy of the DOSBox shortcut and modify the target so that it uses your new configuration file: the "-userconf" option from the original shortcut is replaced with -conf followed by the full path to the new file (for example a conf kept under U:\mpayne\DOSBox\conf). Some DOS applications need a large FILES= value in their config.sys (let's say 200); original DOSBox does not provide a way to adjust this setting, but the patched dosbox.exe build works, reads the config file you made, and deals with the files= issue if you set it properly in the [dos] part of the config. If a game needs a fixed pace, set your cycles amount to fixed 50000, or 50%. Running a DOSBox binary with a file as the command-line argument is equivalent to mounting and launching it by hand. You can use real hardware synthesizers with DOSBox for MIDI. The functionality of the MS-DOS/EM-DOSBOX emulator within the Internet Archive environment has more caveats than a gym membership, to be sure; dropping to the prompt currently freezes it. I have a little script, invoked by a launcher on my Fedora desktop, which copies the captured file to the plotter queue and then deletes it. Changes to the dosbox.conf file take effect the next time DOSBox starts. The config file location is displayed when you start DOSBox, so start it and look at the window with messages.
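On Unix-like systems, the shortcut-with-"-conf" idea translates to a tiny wrapper script; both file names below are hypothetical:

```shell
#!/bin/sh
# Generate a launcher that starts DOSBox with a specific conf file, the
# Unix analogue of editing a Windows shortcut's Target to add "-conf".
# The launcher name and conf path are hypothetical examples.
cat > run-dos622.sh <<'EOF'
#!/bin/sh
exec dosbox -conf "$HOME/dosbox/conf/msdos622.conf"
EOF
chmod +x run-dos622.sh
```

One wrapper per game gives you the same per-title separation that multiple Windows shortcuts provide.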
I executed DOSBox from the Slackware Xfce menu: Applications Menu → System → DOSBox. Many games created during the 1980s are sensitive to the cycles setting. DB Turbo focused on speed and Win9x emulation; I wanted to see if I could take things in a different direction. It also ships with DOSBox 0.74. (On the experimental PS2 build, choosing Exit currently produces "Cannot allocate memory of 16MB" followed shortly by an emulator crash due to PANIC ALLOC.) If set to "auto" (the default), DOSBox-X will warn on exit only if at least one file handle is still open, or you are currently running a guest system. An .mgc file is an exported Magic Dosbox layout for a game. Scroll down the dosbox.conf as needed; use fullscreen=false for DOSBox in a window. Step 8: navigate to the game folder. If the game runs too slow, adjust the settings; the LBA2 movies might still run crippled. Finally, a resolved forum thread: "I'm trying to play WC3. I downloaded and copied the files to 4 CDs. When I run the installation (config) from CD, the sound effects don't work, only the music. Can I just copy the game CD files to a folder and use that folder to run the games?"
Make a "dos" folder and put the folder with your DOS game inside it. When running a DOS app in a DOS window under Windows XP itself, Config.nt and Autoexec.nt hold the program settings. For network emulation, nothing will work until you edit the conf file, since you need to specify which real NIC DOSBox should use. How do I use the new Profile Manager? Note: please update to the latest 2.x version first. In DOSBox MegaBuild, edit the config file in the same way; that build also adds serial mouse emulation and a higher memory limit of 512MB. Where game settings had to be changed under DOS (using another executable), you can now adjust game speed with DOSBox itself. Game data and configuration file locations are documented per game. If the conf cannot be written, check whether the folder is set to read-only; you could also ask on the official DOSBox forums at Vogons. To reach the DosBox Turbo settings menu from landscape mode, swipe down from close to the top margin to show the ActionBar. Since version 58, Magic Dosbox lets people switch between the private and public settings locations on the first page of the "Welcome screen". (A French guide, "Débuter avec DOSBox et les jeux DOS" — "Getting started with DOSBox and DOS games" — covers the basics.) In this config file you can force DOSBox to use the custom key mappings file that you made in step 4. One common complaint: "I try to run it but all the colours are distorted and everything." The file is downloaded as an executable installer, so simply run the installer; Step 2: create a DOSBox folder. But you want to use DosBox Turbo to understand things, too.
You can change the cycles while running, in the DOSBox config or by pressing Ctrl+F11 / Ctrl+F12. A list of available options can be found below in the options section. If you have a version of the game that comes pre-packaged with its own copy of DOSBox, such as the Steam or GOG release, you will need to open its respective conf file rather than the global one. One practical use: DOSBox and WinXP together run Allen-Bradley PCIS software for communicating with an SLC 100 or 150, using a PCC cable. These settings can be changed by opening the DOSBox Options from the Start menu. The Autoexec and Config files were what both DOS and the Windows 9x line used for startup configuration. The OpenPandora "DosBox EX" build adds various screen modes in its Advanced Config (old CRT/TV mode plus a super-smooth mode), USB joypad setup in a virtual custom mapper, starting Dos Navigator from the GUI (handy for installing a game from an ISO/CD file), and music module and CD-audio jukebox player modes. The front-end can export the game list to a file, has an automated build system, and more. First, I change the line that says fullresolution=original to fullresolution=1920x1080, which is my monitor's native resolution. I've only run a handful of games, but so far it seems to be working well.
A security note: if a malformed file is embedded into a downloaded archive and it gets opened in a file browser, arbitrary commands could be executed, so treat untrusted packages with care. Double-clicking a profile launches DOSBox with that game's settings. This package contains D-Fend Reloaded. An earlier Steam release used DOSBox 0.72, which caused some audio stuttering; game compatibility of the current build should be identical. To pick the other MIDI device, change the "midiconfig" attribute from "midiconfig=0" to "midiconfig=1"; then go into the installation program and the game will skip straight to the sound configuration settings. You can inspect the startup commands by typing "type autoexec.bat" at the DOSBox prompt. A mount command with the game path tells DOSBox to mount the Quest for Glory IV game folder as the "C:" drive. (In D-Fend templates, a comment notes that pause is only valid for the second entry.) Finally, dosbox.conf is a configuration file that DOSBox can use globally and/or locally per game; settings that are left out of the local file are taken from the global one.
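The global-plus-local layering can be sketched with two conf files. DOSBox accepts -conf more than once, with later files overriding earlier ones, so the launch command is shown (echoed) rather than run:

```shell
#!/bin/sh
# Global conf plus a per-game conf that overrides one setting. DOSBox
# accepts -conf several times; settings missing from later files fall
# back to the earlier ones. The launch command is echoed, not run.
cat > global.conf <<'EOF'
[sdl]
fullscreen=false

[midi]
midiconfig=0
EOF
cat > game.conf <<'EOF'
[midi]
midiconfig=1
EOF
echo "dosbox -conf global.conf -conf game.conf"
```

With this layout the game gets midiconfig=1 while fullscreen=false still comes from the global file.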
Once your DOS games are configured in DosBox Manager, it's very easy to set them up, start them, or alter their configuration. The Wine integration code builds up a DOSBox config file, executes DOSBox, then wipes out the file. I'm using Windows XP for my BBS, so DOSBox will create the default config file in C:\Documents and Settings\<user>\Local Settings\Application Data\DOSBox\dosbox-SVN.conf. D-Fend Reloaded will delete its temporary conf file from the temp directory on program close. The DOSBox Raw OPL (DRO) format is used for storing captured OPL data from a game running in DOSBox. The mapper file, mapper-SVN.map, will be created in the ~/.dosbox directory. The game folder contains a dummy autoexec file. Drives can be mounted from the command line in DOSBox, from batch files or shortcuts used to launch games in DOSBox, or from the [autoexec] section of the dosbox.conf file.
Scroll down to the very bottom and type steps 1, 2 and 3 into the document and save it. WikiExt monitors and provides timely. conf file (as described above). dosbox : dosbox Exit dosbox: exit. Download dos2unix-7. It is also possible(and in many cases desireable) to mount disk images in DOSBox, using the imgmount command. conf is created automatically in the Windows' user profile folder. 22: Windows Installer: Boot Disk Win98C: Ultrasound. conf in a basic text editor like Notepad. zip (12/99) 0. Templates support. dosbox, in your home folder. 74 | How to use it!? -Tutorial by Jayant Bhawal YOU DO NOT NEED TO USE DOSBOX IF YOU ARE ON WINDOWS XP(x86|32-bit) OR EARLIER. In cases like this, telling Steam to open the game with Boxtron will make the game automagically Just Work™ (assuming you have dosbox installed). You must start a new game for the map changes to take effect. sys to edit the config. Publisher: SPARAL Downloads: 17,507. The final MouseMove coordinates have to be converted to the DosBox window resolution. Create a custom shortcut. # EditorConfig is awesome: https://EditorConfig. \$ dosbox Then at the DOS prompt, type: Z:\> config -wc dosbox. I set the path for savefiles ect. With this configuration file, the fullscreen setting will override what is in your standard DOSBox configuration file, while any other settings will be loaded from the default configuration file. This configuration should produce good and smooth game performance on most computers. 04, I need to open the sound settings and choose the correct output device. parallel1=file dev:lpt1 Port capturing:. 63 MB Claw-Mail Graceful, and sophisticated interface Easy configuration, intuitive operation Abundant features Extensibility Dosbox: 2,569: OMAP. Thanks Matthew. 도스박스 화면 캡쳐하기. ONFIG:Loading primary settings from config file dosbox. 
The wiki says this should be in the same folder as the dosbox executable, but I suspect this only holds true for windows installations - my dosbox executable is in /usr/local/bin and there's no other files in there. It needs to mount the filesystem, so change the directory to where the game is located, and run the game. Whilst DOSBox often works perfectly well with default settings, you can set up a configuration file to tweak them as you want/need to. The location is indicated by the Linux. The below letters are in bold and are from dosbox 0. Theme means new look for magic dosbox application, it can override existing texts and images.
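As a sketch, a per-game configuration might combine a setting override with an [autoexec] section. The game folder and executable below are hypothetical; the fullscreen and midiconfig settings are the ones mentioned in the notes above:

```ini
[sdl]
fullscreen=true

[midi]
midiconfig=1

[autoexec]
mount c "C:\Games\QG4"
c:
game.exe
```

Settings omitted here fall back to the global dosbox.conf, so a per-game file only needs the values that differ.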
|
{}
|
# Find the length of the curve y=(1/(x^2)) from ( 1, 1 ) to ( 2, 1/4 ) [set up the problem only, don't integrate/evaluate]
this is what i did.. let me know asap if i did it right..
y = (1/(x^2))
dy/dx = (-2/(x^3))
L = integral from a to b for: sqrt(1+(dy/dx)^2)dx
L = integral from 1 to 2 for: sqrt(1+(-2/(x^3))^2)dx
L = integral from 1 to 2 for: sqrt(1+(-2/(x^3))(-2/(x^3)))dx
L = integral from 1 to 2 for: sqrt(1+(4/(x^6)))dx
a=1
b=2
n=10
deltaX=0.1
f(x)=sqrt(1+(4/x^6))
L = integral from 1 to 2 for: sqrt(1+(4/(x^6)))dx
L = (deltaX/3)[ f(1) + 4f(1.1) + 2f(1.2) + 4f(1.3) + ... + 2f(1.8) + 4f(1.9) + f(2) ]
L = (0.1/3)[ sqrt(1+(4/1^6)) + 4sqrt(1+(4/1.1^6)) + 2sqrt(1+(4/1.2^6)) + 4sqrt(1+(4/1.3^6)) + 2sqrt(1+(4/1.4^6)) + 4sqrt(1+(4/1.5^6)) + 2sqrt(1+(4/1.6^6)) + 4sqrt(1+(4/1.7^6)) + 2sqrt(1+(4/1.8^6)) + 4sqrt(1+(4/1.9^6)) + sqrt(1+(4/2^6)) ]
(note: the 6th power belongs on x alone, matching f(x)=sqrt(1+(4/x^6)), not on the whole fraction)
L = (0.1/3)[38.905]
L = 1.297
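As a numeric check of the setup, the composite Simpson sum can be scripted; the `f` and `simpson` helper names here are mine, not part of the problem:

```python
import math

# Integrand for the arc length of y = 1/x^2: sqrt(1 + (dy/dx)^2), dy/dx = -2/x^3
def f(x):
    return math.sqrt(1 + 4 / x**6)

def simpson(g, a, b, n):
    # Composite Simpson's rule; n must be even
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return h / 3 * total

print(round(simpson(f, 1.0, 2.0, 10), 3))  # → 1.297
```

With n=10 this gives roughly 1.297, so watch where the 6th power sits: sqrt(1+(4/x)^6) instead of sqrt(1+4/x^6) inflates the sum enormously.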
|
{}
|
# Control Your eLearning Environment: Exploiting Policies in an Open Infrastructure for Lifelong Learning
Juri Luca De Coi
Philipp Kärger
Arne Wolf Koesling
Daniel Olmedilla
Pages: pp. 88-102
Abstract—Nowadays, people need continuous learning in order to keep up to date or to advance in their jobs. An infrastructure for lifelong learning requires continuous adaptation to learners' needs and must also provide flexible ways for students to use and personalize it. Controlling who can access a document, specifying when a student may be contacted for interactive instant messaging, or periodical reminders in order to increase motivation for collaboration are just some examples of typical statements that may be specified by, e.g., learners and learning management system administrators. This paper investigates how existing work in the area of policy representation and reasoning can be used in order to express these statements while at the same time obtaining the extra benefits policies provide (e.g., flexibility, dynamicity, and interoperability). This paper analyzes existing policy languages and integrates one of them as part of a demonstration of its feasibility in providing more advanced and flexible eLearning environments.
Index Terms—Policies, eLearning, lifelong learning, Protune, rule, reactivity.
## Introduction
Society and current labor market evolve rapidly. Nowadays, a learner is potentially any person in the world, who wants to learn or keep up to date on any specific topic, be it at work or in any other facet of her life. Therefore, there is a growing need for more flexible and cost-effective solutions allowing learners to study at different locations (e.g., at home) and at times that are better arranged with their working hours. In addition, learners do not necessarily work alone but may collaborate with or contact other persons, learners, or tutors. Systems addressing these requirements must allow users to have a big flexibility in the way they use the system, how they collaborate, how they share their content, and so forth. Controlling who can access a document, specifying when a student may be contacted for interactive instant messaging, or periodical reminders in order to increase motivation for collaboration are just some of the examples of typical statements that may be specified, for instance, by learners and learning management system administrators.
Research performed in the area of policy representation and reasoning allows for very expressive languages in order to specify statements that learners, course designers, or administrators can use to enhance their interactions with learning agents and management systems. Furthermore, lately, there has been extensive research that provides not only the ability of specifying these statements but also advanced mechanisms for reasoning over, exchanging, and exploiting them [ 1], [ 2], [ 3], [ 4], [ 5], [ 6], [ 7]. This paper focuses on the use of policies, a well-defined flexible and dynamic approach in order to specify and control the behavior of complex and rapidly evolving infrastructures for lifelong learning. It also explores how the integration of a policy framework can increase the flexibility of the interactions and collaborations learners have with learning agents and management systems, therefore enhancing their experiences and learning. The work presented in this paper builds on [ 8] and adds the following contributions:
• More detailed scenario and analysis of requirements.
• Extended comparison among existing policy frameworks.
• Description of the syntax and semantics of the PRovisional TrUst NEgotiation (Protune) framework as well as its architecture.
• Integration of the Protune framework into a Web-based demonstration of the scenarios described in this paper.
• Experimental results on performance of the policy evaluation process.
The rest of this paper is structured as follows: First, Section 2 identifies sample situations in which the specification of policies would increase the flexibility of the interactions and collaborations as well as enhance the learner's experience. These examples show that dynamicity and ease of use are crucial requirements, both being two of the main characteristics of policies. An introduction to the area of policy representation and reasoning, including a definition of the term policy as well as the characteristics of policies, is provided in Section 3. The benefits of integrating policies into learning management systems and personal learner agents in order to support advanced scenarios are described in Section 4, as well as the out-of-the-box benefits of their exploitation. In addition, Section 5 analyzes existing policy languages and frameworks in order to present an overview of available solutions to the reader. It provides a comparison of their main features as well as their advantages and disadvantages from the perspective of their integration into lifelong learning infrastructures. It also introduces the formalization of policies using a selected policy language (Protune) and describes some of the added benefits of its use, such as negotiations and advanced explanations. The architecture of the selected policy framework, its integration into an online demonstration, as well as a performance evaluation are presented in Section 6. Finally, related work is presented in Section 7, and Section 8 concludes this paper.
## Motivation Scenario
Alice holds a master's degree in computer science and works successfully in a company. Recently, Alice was assigned the task of managing a new project starting in a couple of months, and therefore, she needs to learn and refresh her knowledge on project management. Since she has a full-time job including many business trips, she uses an online learning client that allows her to improve her competence whenever she has some available time. With this learning client, she is able to collaborate and to send questions or answers to other learners or tutors; she is able to chat with other students and even participate in a social network. However, since she uses her chat tool also for her job, she restricts her chat facility so that during working time only business contacts and other employees of her company can start a conversation, thereby allowing other students to contact her only in her leisure time. Of course, students trying to contact her during working time get a brief explanation of why a conversation is not possible at that very moment, which even indicates when Alice can be contacted.
Within the program Alice is following, she accesses different learning activities and objects through her learning client. Some of this material is free of charge, but a couple of learning activities she is interested in are offered each one by a different content provider that sells it. Since the material is sold at a good price, she decides to purchase it. Each provider tells Alice that either she has to have an account or she has to provide a credit card for payment of the learning activity. For the first provider, she does have an account and provides her username and password. Therefore, she retrieves the requested material. However, she does not know the second provider and she must disclose her credit card. Alice protected her credit card in a way that it would only be disclosed to providers she may trust and the learning client provides a mechanism by which a content provider and Alice can trust each other even if they have not had any transaction in common before.
The learning client Alice is using allows her to share exercises and other relevant documents stored in her computer (e.g., using a peer-to-peer network [ 9], [ 10] or uploading them to a server) with other students following the same program or within the same learning network. She may even create some new material out of what she learned and her experience at work. She specifies which documents are to be shared and which conditions other students must fulfill in order to be able to retrieve it (e.g., being part of the same program she is enrolled or being a tutor). Even if lifelong learning means that you can learn whenever you like, the human factor still plays an important role in the setting, regarding motivation and exchange of information. Therefore, inactivity may yield the danger of not keeping up with her learning group. In order to ensure the success of the students, the learning client includes a personalizable agent. Among other uses for this agent, Alice can create some guidelines in a way that the agent reminds her when she has to finish some learning activities or sends her an e-mail when she has been inactive for more than a week.
Bill is a tutor and online course designer for the university, in which Alice attends many of her online courses with the help of her learning client. In his role as tutor, Bill specifies in the system that any of his learners not having any activity during more than two weeks should be sent a message or notification asking whether she needs additional help.
Bill, in his role of course designer, could also specify in the courses he creates that some parts of the course are shown with more or less information based on the learner accessing it (e.g., whether she has already tried the formative assessment of the course) or even dynamically link to different kinds of contents based on the information the learner provides at the time she is accessing the course (e.g., provide online games [ 11] in case the learner notifies she prefers [ 12] those kind of learning resources).
Due to all these flexible facilities and all their personalization and configuration possibilities, Alice is able to finish her program successfully.
## Policies—A Brief Introduction
This section briefly introduces what the term policy refers to and the advantages of policies with respect to more conventional approaches. It also describes the features a policy framework would ideally provide, which will be used in Section 5 as part of the comparison criteria, in order to select the framework that better meets the requirements of a lifelong learning scenario such as the one presented previously ( Fig. 1).
Fig. 1. Sample policies in an open and flexible lifelong learning infrastructure.
### 3.1 The Concept of Policy
The term policy can be generally defined as a "statement specifying the behavior of a system," i.e., a statement that describes which decision the system should take or which actions it should perform according to specific circumstances. Policies are encountered in many situations of our daily life: the following example is an extract of a return policy of an online shop: 1
Any item for return must be received back in its original shipped condition and original packing. The item must be without damage or use and in a suitable condition for resale. All original packaging should accompany any returned item. We cannot accept returns for exchange or refund if such items have been opened from a sealed package.
With the digital era, the specification of policies has emerged in many Web-related contexts and software systems. E-mail client filters are a typical example of policies. The following policy is an example of an e-mail filter addressing spam:
If the header of an incoming message contains a field "X-Spam-Flag" whose value is "YES," then move the message into the folder "INBOX.Spam." Moreover, if this rule matches, do not check any other rules after it.
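As a toy illustration, this filter rule could be encoded as follows; the message is represented as a plain dictionary of header fields, and the function name is illustrative rather than any mail client's actual API:

```python
def apply_spam_rule(headers):
    """Return (target folder, stop_processing) for an incoming message."""
    # Policy: X-Spam-Flag == "YES" -> file into INBOX.Spam and stop rule matching
    if headers.get("X-Spam-Flag") == "YES":
        return "INBOX.Spam", True
    return "INBOX", False

folder, stop = apply_spam_rule({"X-Spam-Flag": "YES"})  # → ("INBOX.Spam", True)
```

The returned flag models the "do not check any other rules after it" clause of the policy.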
Table 1. Examples for Policies: 1) A Security Policy, 2) A Privacy Policy, and 3) A Business Rule
### 3.2 Advantages of Policies
The specification of policies using a policy language yields many advantages compared to other conventional approaches: policies are dynamic, typically declarative, normally have well-defined semantics, and usually allow for reasoning over them. In the following, all above-mentioned policy properties will be thoroughly described.
#### 3.2.1 Dynamic
The description of the behavior of an agent or other software component is usually built into the component itself. The main drawback of this design choice is that whenever the need for a different behavior arises and new code for that behavior is created, the software typically has to be recompiled and reinstalled (or updated). A more reusable design provides a component with the ability to adapt its behavior according to some dynamically configurable description of the desired behavior. In this case, as soon as the need for a different behavior arises, only the description of the behavior needs to be replaced and not the whole component. Since policies are, as mentioned above, "statements specifying the behavior of a system," changing the behavior of a policy engine (i.e., a component able to enforce policies) only requires replacing the old policy with a new one.
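A minimal sketch of this idea follows; the engine, its policy format, and the role-based condition are entirely made up for illustration, but they show how behavior changes by swapping data rather than code:

```python
class PolicyEngine:
    """Toy engine whose behavior is driven by a replaceable policy document."""

    def __init__(self, policy):
        self.policy = policy  # e.g. {"allow_roles": ["tutor"]}

    def replace_policy(self, policy):
        # Changing behavior = swapping the policy; no recompilation needed
        self.policy = policy

    def is_allowed(self, role):
        return role in self.policy["allow_roles"]

engine = PolicyEngine({"allow_roles": ["tutor"]})
print(engine.is_allowed("student"))   # → False
engine.replace_policy({"allow_roles": ["tutor", "student"]})
print(engine.is_allowed("student"))   # → True
```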
#### 3.2.2 Declarative
The traditional ( imperative) programming paradigm requires programmers to explicitly specify an algorithm to achieve a goal. On the other hand, the declarative approach simply requires that programmers specify the goal, whereas the implementation of the algorithm is left to the support engine. This difference is commonly expressed by resorting to the sentence "declarative programs specify what to do, whereas imperative programs specify how to do it." For this reason, declarative languages are commonly considered a step closer to the final user than imperative ones. Policy languages are typically declarative and policies are typically declarative statements, and as such, they can be more easily defined by final (possibly non-computer experts) users. The policies listed in Table 1 are declarative as well: for instance, the first one does not explain which steps the process of reviewing and approving a firewall configuration consists of but simply asserts under which circumstances they have to be reviewed and approved.
#### 3.2.3 Well-Defined Semantics
A language's semantics is well defined if the meaning of a program written in that language is independent of the particular implementation of the language. Logic programs and Description Logic knowledge bases have a mathematically defined semantics; therefore, we assume languages based on either of the two formalisms to have well-defined semantics. Programs written in a language provided with a well-defined semantics are easily exchangeable among different parties since each party understands them in the same way. On the other hand, natural language sentences are ambiguous and can be interpreted differently. Policies with well-defined semantics, therefore, have advantages over policies written in a natural language as the ones provided in Table 1.
#### 3.2.4 Reasoning
The term "reasoning" refers to the possibility of combining known information in order to infer new information, as in the following example.
If it is known that "all humans are mortal" and that "Socrates is human," one can infer that "Socrates is mortal."
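This single inference step can be mimicked with a minimal forward-chaining sketch; the fact representation and rule are my own, not any particular reasoner's syntax:

```python
# Facts as (predicate, subject) tuples, plus one rule: human(X) -> mortal(X)
facts = {("human", "Socrates")}

def forward_chain(known):
    inferred = set(known)
    for pred, subj in known:
        if pred == "human":
            inferred.add(("mortal", subj))  # "all humans are mortal"
    return inferred

print(("mortal", "Socrates") in forward_chain(facts))  # → True
```

The tuple ("mortal", "Socrates") is not among the explicit facts; it only appears after the rule is applied, which is exactly the implicit knowledge described below.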
On the one hand, it is true that the sentence "Socrates is mortal" is different than the ones preceding it, but on the other hand, it is clear that, according to the common sense, one can deduce (i.e., infer) the third sentence from the first two. The inferred information is referred to as implicit knowledge, since it was not explicitly available before. In the context of declarative programs, statements a program consists of can be reasoned over in order to infer new statements. Reasoning applied to the third policy in Table 1 allows one to deduce that (for instance) if
• John is a person,
• John owns a car,
• the car is a tangible personal property,
• John uses the car in his daily work (and therefore, he uses it for the production of income), and
• the car has a taxable value of less than $500,

then John is entitled to an exemption from taxation of the car.

### 3.3 Desiderata of Policy Languages

Formally specified policies generally have the features described in the previous section. Nevertheless, current policy languages assume that the frameworks enforcing policies written in such policy languages support more advanced features. A list of such additional features is given next:

• Positive versus negative authorization. Policies specifying conditions under which resources can be accessed may be of two types: positive or negative. Positive authorization policies specify that if the conditions are satisfied, some authorization is granted, e.g., "access is granted if the requester is a member of the company," whereas negative authorization policies specify that if the conditions are satisfied, some authorization is denied, e.g., "access is denied if the requester is a member of a foreign company." Positive/negative policies retain the natural way people express policies; nevertheless, it can be argued that the specification of negative authorizations complicates the enforcement of access control in a system [ 3] and brings the extra complexity of having to deal with conflicts. A conflict situation arises whenever there are policies applicable to the same situation: one granting authorization and the other denying it. In the context of security and access control, a typical approach is assuming that access to any resource is denied by default and only positive authorization policies are defined stating which resources are allowed [ 2], [ 4]. The reason is that the cost of disclosing a sensitive resource is much higher than the cost of not disclosing a nonsensitive one. However, for frameworks where these conflicts may arise, different conflict resolution strategies are provided [ 1], [ 5] (e.g., static detection at specification time or at runtime with precedence metapolicies).

• Negotiations.
In traditional access control or authorization scenarios, only one party is able to specify policies that the other one has to conform to. Typically, only one of the interacting parties is enabled to specify the requirements the other has to fulfill, whereas the other has no choice but to satisfy them (and thereby be authorized) or not (and thereby not be authorized). An example of this classical approach is the payment process in current online stores: the shop specifies what a customer has to provide (e.g., a credit card number) in order to purchase a product, but this is a one-way street: the customer has no means to impose constraints the shop has to satisfy too (e.g., trust evidences of the shop). Therefore, a more expressive approach allows both parties to discuss (i.e., negotiate) in order to reach an agreement. In an online buying process, both the selling and the buying parties want the transaction to be successful (because both parties can take advantage of a successful transaction) and therefore are willing to make every possible effort leading to a successful transaction. This kind of transaction may require support for policy-driven negotiations. Furthermore, negotiations allow for some policies to be private, possibly being dynamically disclosed to other parties based on the satisfaction of some conditions.

• Evaluation and actions. The evaluation of a policy is a process that checks whether it is satisfied or not (that is, whether it holds or not). The infrastructure enforcing a policy is in charge of its evaluation. Typically, a request or an event causes a policy to be evaluated in order to check what kind of behavior (e.g., grant access, move to spam folder, and so forth) has been triggered. In order to evaluate a policy, some actions performed by the policy infrastructure might be required.
Examples are a query to legacy systems (e.g., a database storing who is a member of a given company) or other sources (e.g., the checking of a credit card's validity at an external Web service), and the sending of evidences (e.g., certificates, digital driver's license) in order to certify some properties of a party. Another common action is the retrieval of environmental properties like the current system time (e.g., if access is allowed only in a specific time frame) or location (e.g., share files with learners in the same meeting room).

• Explanations. It should be possible to generate explanations [ 13] out of the policies and the decisions they make. On the one hand, they help a user check whether the policies she created are correct, and on the other hand, they inform other users about why a decision was taken (or how the users can change the decision by performing a specific action). For example, if a student tries to contact Alice during her working time, that student would rather appreciate receiving a message like "I am not available from 8:00 a.m. to 5:00 p.m." instead of "I am not available." Or if Alice discloses her credit card number to a content provider and it is not accepted, a message like "This credit card is invalid because it is expired" would be more useful than simply "Invalid credit card."

• Strong/lightweight evidences. The result of a policy's evaluation may depend on the identity or other properties of a requester such as age, membership in a certain club, and so forth. Therefore, a policy language should provide a means to communicate such properties. Usually, this is done by sending digital certificates called strong evidences. Typical strong evidences are credentials signed by trusted entities called certification authorities. Lightweight evidences [ 14] are nonsigned declarations or statements (e.g., license agreements). As an example, the driver's license number maintained as an integer value is a lightweight evidence.
A digital version of the driver's license that can be submitted to a certification authority in order to prove that it has been signed by the government and that contains certified properties (e.g., address, date of birth, and so forth) is a strong evidence. It is important that a policy framework allows for both kinds of evidences.

• Ontologies. As stated above, policies will be exchanged among entities within the lifelong learning infrastructure. Although the basic constructs may be defined in the policy language (e.g., rule structure and semantics), policies may be used in different applications and even define new concepts. Ontologies help provide well-defined semantics for new concepts to be understood by different entities.

## Using Policies for Lifelong Learning

As has been previously presented, policies are a flexible means to describe how a system should act, depending on certain conditions and events. In this section, we show that with all the described features a policy framework can serve as a behavior control for an open eLearning environment and as a flexible means for learners to personalize the eLearning system and tailor it to their specific needs. In our scenario, when Alice uses an open learning environment, she implicitly exploits policies (statements controlling the behavior of her system) in several ways. She makes use of restrictions for incoming chat connections or conditions under which Alice's locally stored resources (either documents or her credit card number) may be disclosed, content providers specifying whether a resource is free of charge or at cost (and the payment methods together with business rules like, e.g., discounts), or general statements indicating how some entity (e.g., a software agent) should react to a specific event. An open lifelong learning infrastructure must provide sufficient functionalities in order to support all these situations.
Aligned to the scenario provided in Section 2, the rest of this section describes how policies and their features can be used in order to give learners sufficient flexibility and adaptivity.

### 4.1 Personalizing the Flow of Communication

As described in our scenario, communication is an important part of learning, and thanks to Web technology, communication via chat, e-mail subscription, and so forth is supported by almost all eLearning applications. For example, in our scenario, Alice's eLearning client offers a chat tool in order to foster the communication between her, her fellow students, and her tutors. However, such a chat tool might overload the user because of too many chat requests. Some of them may be spam, some may concern urgent issues about Alice's business trips, others may be requests from her fellow students, and so on. Policies offer a means to adapt the control of Alice's chat client and therefore cope with the plethora of chat requests. Since policies are declarative, Alice can easily define who (i.e., business contacts, fellow students, and so forth) may send her a chat message at what time (i.e., leisure time, during business trip, and so forth) as well as whom she considers a business contact or a friend. At the same time, since the policy framework included in the eLearning client offers explanations, each requester who wants to chat with Alice during working time receives an explanation about why it is currently not possible to chat with her. As we showed above, policies are also suitable for the filtering of e-mails: automatic replying or forwarding, removal of spam, and so forth. Therefore, policies ease the handling of the communication flow and take the burden of browsing through all the incoming messages away from the learner.
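Alice's chat policy from the scenario can be sketched as follows; the contact list, working hours, and function name are invented for illustration, and a real policy framework would derive the explanation from the policy rather than hard-code it:

```python
from datetime import time

# Hypothetical encoding of Alice's chat policy
BUSINESS_CONTACTS = {"bob@company.example"}
WORK_START, WORK_END = time(8, 0), time(17, 0)

def may_chat(sender, now):
    """Return (allowed, explanation) for an incoming chat request."""
    if WORK_START <= now < WORK_END and sender not in BUSINESS_CONTACTS:
        # Deny with an explanation instead of a bare refusal
        return False, "I am not available from 8:00 a.m. to 5:00 p.m."
    return True, ""

allowed, why = may_chat("carol@students.example", time(10, 30))  # denied, with reason
```

A fellow student asking at 10:30 is refused but learns when Alice becomes reachable; the same request at 19:00 would be accepted.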
### 4.2 Authorization and Trust

Policies allow users and systems to characterize new users or systems by their properties and not simply by their identity (crucial in an open environment where complete strangers may interact with each other). This enables a new kind of access control in an open environment. For example, if Alice defines in a policy that any user of her community can access a certain set of the resources stored on her computer, this access right does not need to be adapted when the community changes or new people join. When performing a purchase through her eLearning client, Alice can exploit the policy-driven negotiation functionality her eLearning client provides. Negotiations can be (semi)automatically performed among entities driven by their policies [ 15]. She may have protected her credit card with a policy. This policy states that she is willing to disclose it only to content providers which are certified by the Better Business Bureau (BBB), and she therefore considers such providers as trusted. The BBB seal program guarantees that its members will not misuse or redistribute disclosed information. Following Alice's example, a negotiation is needed because, in order to let Alice prove that she has a valid credit card that can be charged, the content provider has to prove first its certification by the BBB. This negotiation is depicted in Fig. 2.

Fig. 2. Sample negotiation sequence between Alice and the learning content provider.

In the previous example, if the content provider does not accept Alice's credit card, she would get a detailed explanation helping her to know what to do next in order to successfully complete the buying process. A statement like "This credit card is invalid because it expired" would indeed be more useful than a simple deny such as "Invalid credit card."
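The message exchange of such a negotiation can be sketched as a toy script; the credential names and the two-step protocol below are illustrative only and far simpler than what a framework like Protune actually performs:

```python
def negotiate(provider_credentials, credit_card):
    """Toy negotiation: Alice discloses her card only to BBB-certified providers."""
    steps = ["provider: please provide a credit card"]
    # Alice's counter-policy: the provider must prove its BBB certification first
    steps.append("alice: please prove your BBB certification first")
    if "BBB" in provider_credentials:
        steps.append("provider: <BBB certificate>")
        steps.append(f"alice: <credit card {credit_card}>")
    else:
        steps.append("alice: negotiation failed (no BBB certificate)")
    return steps

for line in negotiate({"BBB"}, "XXXX-1234"):
    print(line)
```

With a certified provider the exchange ends in disclosure; without the certificate, Alice's private data never leaves her client.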
Aligned to our eLearning scenario, Alice might want to exploit negotiations in order to decide which fellow students are entitled to access the resources she created and stored on her computer, thereby requiring other trust evidences from them, such as some digital society membership credential. As shown, policies provide especially sophisticated methods to control who accesses resources. This is particularly helpful if a learner wants to share homework or lecture notes with fellow students. According to certain properties of the requester, access to this information may be granted or not. Therefore, authentication plays a remarkable role in lifelong learning. Actions are a desired feature in this context: a fellow student who wants to access Alice's exercises may first have to provide a digital membership certificate proving that she enrolled in the same course. Sending this certificate could also be integrated into the specification of the policy via an action.

### 4.3 Motivating and Triggering Learning

Policies can be used to control the behavior of software agents in order to directly react to a learner's behavior or learning progress. Agents can send automatic notifications via chat messages. They may also drive or define the rules of electronic games and simulations for educational purposes or many other approaches in order to increase a learner's motivation while learning. Due to the declarativity of policies, all these behaviors can be configured and driven via policies based on the learner's properties and behavior. The automatic chat message in our scenario is one example. Alice wants to receive such a message in case she was inactive for more than a week. The triggering can be done via a policy language enabling actions: the sending of a chat message itself is not part of the policy framework, but it can still be called via an action construct in the policy language.
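Bill's tutor-side rule ("notify any learner inactive for more than two weeks") can be sketched with a simple check; the learner names, dates, and helper are made up, and the actual sending of the notification would be an external action invoked by the policy framework:

```python
from datetime import date, timedelta

# Bill's policy: flag learners with no activity for more than two weeks
INACTIVITY_LIMIT = timedelta(days=14)

def learners_to_notify(last_activity, today):
    """Return the learners whose last activity is older than the limit."""
    return sorted(name for name, last in last_activity.items()
                  if today - last > INACTIVITY_LIMIT)

print(learners_to_notify({"alice": date(2008, 3, 1),
                          "carl": date(2008, 3, 20)},
                         date(2008, 3, 21)))  # → ['alice']
```

Changing the two-week threshold means editing the policy datum, not the agent's code, which is precisely the dynamicity argued for in Section 3.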
### 4.4 Personalizing the Content Presentation

Adaptivity of eLearning content has been the focus of research for some decades already (see [16]). By using policies, the adaptive behavior of the system and, in particular, the adaptive presentation of learning content can be defined in a declarative way. In our scenario, Bill may specify that certain items in his eLearning course are shown or hidden according to the results of a formative assessment of a learner, or according to information provided at runtime by the learner (therefore even allowing for partial profiles stored at the user machine). If institutions agreed on profile models (e.g., competence descriptions), this could even lead to solutions for the exchange of partial user profile information between the different systems a learner is using (either distributed profiles and/or partial profiles stored at the learner machine).

## 5 Policy Frameworks and Their Features

In this section, we first briefly introduce the most prominent policy languages. Then, we compare these languages in order to select one that provides the required and desired features described in Section 3. At the time of writing this paper, a variety of policy languages had been developed and were available. Out of those, we chose the most popular and widely used ones:

• Ponder [3] is a policy language meant to support local security policy specification and security management activities; typical application scenarios include registration of users or logging and audit events, whereas firewalls, operating systems, and databases belong to the applications targeted by the language.

• Web Services Policy Language (WSPL) [6] supports the description and control of various aspects and features of a Web service.

• KAoS [1] addresses Web services and general-purpose grid computing, although it was originally oriented to software agent applications where dynamic runtime policy changes need to be supported.
• Rei [5] was primarily designed to support pervasive computing applications. Such applications are meant to run on mobile devices that use wireless networking technologies to discover and access services and devices.

• PeerTrust [2] is a simple yet powerful language for trust negotiation on the Semantic Web based on distributed query evaluation.

• Protune [4] supports trust negotiation as well as a broad notion of "policy" and does not require shared knowledge besides evidences and a common vocabulary.

• eXtensible Access Control Markup Language (XACML) [7] is an XML-based policy language meant to serve as a standard general-purpose access control language, ideally suitable to the needs of most authorization systems.

In the following section, we compare these languages in terms of whether they are suited for eLearning scenarios.

### 5.1 Comparison of Policy Frameworks

The number and variety of policy languages proposed so far is justified by the different requirements they had to accomplish and the different use cases they were designed to support. In the following, we compare the features offered by the main policy languages in detail (see Table 2 for a summary).

Table 2. Different Policy Languages and Their Features

• Negotiations. The authors of [6] adopt a broad notion of "negotiation": a negotiation is supposed to happen between two peers whenever 1) both peers are allowed to define a policy and 2) both policies are taken into account when processing a request. According to this definition, WSPL would support negotiations as well.
However, for the sophisticated negotiations we talk about in this paper, we need to adopt a narrower definition of "negotiation" by adding a third prerequisite: 3) the evaluation of the request must be distributed, i.e., both peers must locally evaluate the request and either decide to terminate the negotiation or send a partial result to the other peer, who will continue the evaluation. Whether the evaluation is local or distributed may be considered an implementation issue as long as policies are freely disclosable. From a conceptual point of view, distributed evaluation is required as soon as the need for keeping policies private arises: if policies were not private, simply merging the peers' policies would reveal possible compatibilities between them. This is not the case in lifelong learning, where policies may be sensitive. Imagine Alice states that her friends, but none of her business contacts, can contact her via instant messenger in her leisure time: most probably, Alice does not want her business contacts to see this policy.

• Evaluation. In languages that support negotiations, policy evaluation is distributed: at each step in the negotiation process, a peer sends the other one information that (possibly) lets the negotiation advance. However, each peer can terminate the negotiation without successful completion at any time. The evaluation is supposed to be performed locally by languages that do not support negotiations, although some of them may allow policies to be split over several nodes and provide some means for collecting all fragments before (or during) the evaluation. Finally, some languages like Ponder support neither distributed evaluation nor distributed policies [17].

• Delegation is often used in access control systems to cater for the temporary transfer of access rights to agents acting on behalf of other ones (e.g., passing write rights to a printer spooler in order to print a file).
The right to delegate is itself a right and as such can also be delegated. Some languages provide a means for cascaded delegations up to a certain length (1 in Ponder, 2 in Rei): such languages provide a specific built-in construct in order to support delegation. Protune does not provide high-level constructs to deal with delegation but simulates them by exploiting more fine-grained features of the language: this has the remarkable side effect of allowing one to explicitly set the desired length of a delegation chain (as well as other properties of the delegation). Delegation (of authority) can be expressed in PeerTrust as well, whereas KAoS, WSPL, and XACML do not support delegation.

As shown in Table 2, Ponder, Rei, PeerTrust, and Protune support delegation, but only PeerTrust and Protune also allow for negotiations and both strong and lightweight evidences. Moreover, Protune is the only policy language supporting advanced explanation mechanisms and seems to be one of the most complete languages available (as has been pointed out in [18]). On the other hand, Protune assumes by default that resources are private, therefore not allowing for the specification of negative authorizations, a feature supported by other frameworks like Rei or KAoS. However, Protune not only allows for the distributed evaluation of policies (thereby allowing policies to be kept private), but open source implementations are also available, making the language easily accessible, usable, and extendable. For these reasons, and since negative authorization policies are not necessarily needed in our application scenario, we decided to exploit Protune in our implementation. In the following, a brief overview of the Protune language is provided, as well as a description of its application to the scenario described in Section 3.
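To make the delegation comparison above concrete, the following toy sketch models cascaded delegation with an explicitly bounded chain length (the property the text attributes to Protune's fine-grained modeling; Ponder fixes the length at 1, Rei at 2). It is a hypothetical model, not the syntax of any of the languages discussed:

```python
# Hypothetical sketch of cascaded delegation with an explicit maximum
# chain length. `grants` holds direct rights; `delegations` holds
# (delegator, delegatee, resource) triples.

def may_access(subject, resource, grants, delegations, max_chain=2):
    """True if `subject` holds the right directly or via a delegation
    chain of at most `max_chain` links."""
    if (subject, resource) in grants:
        return True
    frontier, depth = {subject}, 0
    while depth < max_chain:
        # follow one delegation link backwards per iteration
        frontier = {d for (d, s, r) in delegations
                    if s in frontier and r == resource}
        if any((d, resource) in grants for d in frontier):
            return True
        depth += 1
    return False
```

For the printer-spooler example from the text: if Alice holds the print right and delegates it to the spooler, `may_access("spooler", "printer", ...)` succeeds, while a chain longer than `max_chain` is rejected.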
### 5.2 Protune Language

The previous section provided a description of the most prominent policy languages defined up to the date of creation of this paper. In this section, we introduce the Protune policy language, which seems to be the most suitable policy language for implementing our scenario. It is important to note that, even though we provide a detailed description of the Protune language and its reasoning process, users will not be requested to specify their policies in a rule-based logic language such as Protune. On the contrary, users will be able to select and instantiate existing policies from a standard library 5 or, for advanced users and administrators, appropriate tools for the specification of new policies will be provided. In fact, most of the policy languages presented in Section 5 provide management editors that help users and administrators create and manage their policies.

The Protune policy language is based on regular logic program rules [19] of the following form:

$A_j \leftarrow L_1 \wedge \cdots \wedge L_n \quad (j = 1, \ldots, m)$

which, according to the standard Logic Programming notation, can be written as

$A_j \;\text{:-}\; L_1, \ldots, L_n$

$A_{1}, \ldots, A_{m}$ represent standard logical atoms (called the heads of the rules) and $L_{1}, \ldots, L_{n}$ (the bodies of the rules) are literals, that is, $L_{i}$ is equal to either $B_{i}$ or $not\;B_{i}$, for some logical atom $B_{i}$. In the following, we present a simple rule that Sam evaluates each morning before leaving home:

Figure

The intended meaning is as follows: Sam is ready to leave home if he has his keys, wallet, mobile phone, watch, transponder, and a tissue. The main drawback of this rule is that it only applies to Sam; in case Tom also needs to perform the same check before leaving home, a new rule has to be defined for him.
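To make the evaluation of such rules concrete, here is a minimal backward-chaining sketch in Python. The predicate names follow the prose above and are illustrative, not Protune syntax:

```python
# A tiny backward-chaining sketch of how logic program rules can be
# evaluated; a fact is modeled as a rule with an empty body.

def holds(goal, rules):
    """True if `goal` is provable: `rules` maps each head atom to a list
    of alternative bodies (each body is a list of subgoals)."""
    for body in rules.get(goal, []):
        if all(holds(subgoal, rules) for subgoal in body):
            return True  # one applicable rule evaluated successfully
    return False

rules = {
    "samReadyToLeaveHome": [["hasKeys", "hasWallet", "hasMobilePhone",
                             "hasWatch", "hasTransponder", "hasTissue"]],
    # rules can be nested: having the keys is itself defined by a rule
    "hasKeys": [["hasMainEntranceKey", "hasBackEntranceKey",
                 "hasPostBoxKey"]],
    # facts about Sam's current situation
    "hasMainEntranceKey": [[]], "hasBackEntranceKey": [[]],
    "hasPostBoxKey": [[]], "hasWallet": [[]], "hasMobilePhone": [[]],
    "hasWatch": [[]], "hasTransponder": [[]], "hasTissue": [[]],
}
```

Evaluating `holds("samReadyToLeaveHome", rules)` tries the applicable rules until one succeeds or none remain, mirroring the reasoning process described in the text.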
Fortunately, Logic Programming allows one to parametrize atoms so that they can refer to entities not known in advance, as in the following example:

Figure

In this example, $X$ represents a variable (which may refer to whichever entity) and the intended meaning is as follows: some entity is ready to leave home if the same entity has its keys, wallet, mobile phone, watch, transponder, and a tissue. If the variable $X$ is unified with (i.e., replaced by) "Sam", the rule above is semantically equivalent to the aforementioned $samReadyToLeaveHome$. Finally, notice that rules can be arbitrarily nested. For instance, the rule $readyToLeaveHome(X)$ states that an entity is ready to leave home if (among other things) it has its keys. This may in turn mean that the same entity must have the main entrance key, the back entrance key, and the post box key, as stated in the following rule:

Figure

Rules can be reasoned over, i.e., rules can be exploited to check whether some statement holds (i.e., is true). During the reasoning process, all applicable rules are checked until either one of them is successfully evaluated (thereby providing a proof that the original statement holds) or no more rules are applicable (thereby providing a proof that the original statement does not hold). In the following, we sketch the reasoning process for checking whether "Sam" is ready to leave home (i.e., whether the goal $readyToLeaveHome("Sam")$ holds).

Figure

In addition to usual Logic Programming-based languages, Protune provides policy-oriented features like support for actions, evidences, and metapredicates.

• Actions. Protune allows one to specify actions within a policy. Typical examples of actions are sending evidences, accessing legacy systems (e.g., a database), or checking environmental properties (e.g., time). Actions are represented as usual predicates (called provisional predicates). Provisional predicates hold if they have been successfully executed.
• Evidences. Protune allows one to refer to strong and lightweight evidences (i.e., credentials and declarations) from within a policy. Evidences can be regarded as a set of property-value pairs associated with an identifier. Each property-value pair is represented according to an object-oriented-like dot notation $id.property:value$.

• Metapredicates. Protune allows one to define properties of predicates, i.e., predicates about predicates. These predicates are called metapredicates and are associated with property-value pairs. They are represented through a notation close to the one used for evidences, namely $predicate \rightarrow\!\!property:value$. Rules containing metapredicates are called metarules. Metarules are typically exploited to assert some information about the predicates occurring in a policy, e.g., the type of the predicate (property type, e.g., provisional) or some directives for the verbalization of the predicate, which are meant to be used by the explanation facility (property explanation). Some properties apply only to provisional predicates: the value of the property ontology is the identifier of the action associated with the provisional predicate as reported in some ontology, in order to achieve a common understanding of an action among several parties. The property actor (respectively, execution) specifies which peer (either the requester or the provider) should carry out the action (respectively, when the action should be performed).

In the remainder of this section, we exploit a policy fragment in order to introduce how Protune policies look and the interplay between Protune rules and metarules. We revisit this policy fragment in Section 5.3.

Figure

This policy fragment contains a rule 1 and three metarules (2 to 4). The rule states that the predicate $is\_colleague$ holds if each literal in the body of the rule holds. The intuition behind that rule is that a credential is required from the communicating party.
The property fields of this credential need to have specific values. For example, the type of the credential needs to be $employee$, meaning that the credential states that its owner is a member of a certain company, in our case "SomeCompany". $Name$ is a variable and should be the same name as stated in the rule head 1. This policy may, for example, serve Alice's learning client in order to find out whether someone she communicates with is really working for her university or just pretending:

• Metarules 2 and 3 state that the predicate $credential$, but not $is\_colleague$, is a provisional predicate (i.e., represents an action).

• Metarule 4 states that the action associated with $credential$ must be performed by the other peer. In the following, we assume that the provisional predicate $credential$ is associated with the action of sending a credential to the other peer.

Assuming that we want to check whether Bob is a colleague, the policy fragment will be evaluated against the goal $is\_colleague("Bob")$ as follows:

• line 1.1 checks whether a credential $cred$ has been sent by the other peer. If this is the case, the evaluation proceeds; otherwise, a failure is reported;

• lines 1.2 to 1.4 check whether the values of the properties of $cred$ correspond to the ones listed in the body of the rule. If this is the case, the evaluation proceeds; otherwise, a failure is reported.

### 5.3 Motivation Scenario (Revisited)

So far, we provided a description of all the properties a policy language must provide in order to address our scenario, and we described the language we chose, i.e., Protune. In this section, we model the scenario introduced in Section 2 by formalizing Alice's policies in the Protune language. We further present how the introduction of these policies and their evaluation provides important benefits for Alice, namely explanations and negotiations.
In our running example, Alice needs to specify that, during work time, her chat facility is only allowed to accept incoming messages from business contacts and other employees of her company. Intuitively, this policy states that chats are allowed for business contacts as well as for colleagues at working time (first and second rules) and for all other people only at leisure time.

Figure

Further, $working\_time$, $leisure\_time$, $is\_business\_contact$, and $is\_colleague$ may be defined as
Figure
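A rough operational reading of Alice's chat policy, together with the $is\_colleague$ credential check from Section 5.2, might look as follows. The working hours, property names, and credential layout are assumptions for illustration only:

```python
# Hypothetical sketch of Alice's chat policy: chats are allowed for
# business contacts and colleagues during working time, and for
# everybody during leisure time. A colleague is recognized, as in the
# is_colleague rule, by an "employee" credential of "SomeCompany".

def is_colleague(name, credentials):
    return any(c.get("type") == "employee"
               and c.get("company") == "SomeCompany"
               and c.get("name") == name
               for c in credentials)

def allow_chat(name, credentials, business_contacts, hour):
    working_time = 9 <= hour < 17          # assumed working hours
    if working_time:
        return name in business_contacts or is_colleague(name, credentials)
    return True                            # leisure time: everybody may chat
```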
Especially important in this example is that, by using Protune, explanations are available. In both policy excerpts, the last rules are metarules describing how to explain the corresponding predicate (the symbol & refers to string concatenation). For example, if Bob, who is a friend of Alice, tries to contact Alice during her working time, the following explanation will be automatically generated from the specified policy [13]:
Figure
In this explanation, the statements that are true and do not depend on the requester are hidden. Hence, the explanation focuses on the conditions that are not fulfilled and is not crowded with conditions that are (possibly trivially) true. However, full explanations providing both fulfilled and unfulfilled conditions can be generated on demand. In addition, clicking on [details] in a line provides a new explanation for the concept described in that line. For example, clicking on the last [details] link would yield an explanation of what exactly is meant by leisure time, i.e., which time frame Alice considers leisure time.
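The filtering behavior described above, hiding fulfilled conditions by default and exposing them only on demand, can be sketched as follows (the condition texts are illustrative):

```python
# Hypothetical sketch of the explanation behavior: conditions that hold
# (and do not depend on the requester) are hidden, so the generated
# explanation focuses on what is NOT fulfilled.

def explain(conditions, full=False):
    """`conditions` maps a verbalized condition to whether it holds.
    By default only unfulfilled conditions are reported; `full=True`
    yields the on-demand complete explanation."""
    if full:
        return [(text, "fulfilled" if ok else "not fulfilled")
                for text, ok in conditions.items()]
    return [(text, "not fulfilled")
            for text, ok in conditions.items() if not ok]

conditions = {
    "the requester is a business contact": False,
    "it is Alice's working time": True,
}
```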
Alice might also have the following policy protecting her credit card when trying to access online resources at different learning resource providers. Intuitively, this policy states that the credit card can be released to trusted parties and that a trusted party is a party providing a BBB credential.
Figure
The provider she is contacting has the following policy specifying the payment conditions:
Figure
Let us assume that Alice finds a course she is interested in and requests access to it. As soon as this request reaches the content provider, a dynamic negotiation, driven by the policies defined by Alice and the provider, is initiated (depicted in Fig. 2). This negotiation is intended to satisfy the policies of the communicating parties in an iterative way and to enable online interaction.
## 6 Implementation
This section briefly presents the Protune policy framework as well as its main components. It also introduces an online demonstration of some of the features described in this paper applied to an eLearning scenario. Finally, it evaluates the performance of the Protune framework.
### 6.1 Protune Policy Framework
The Protune framework [ 4] aims at combining distributed trust management policies with provisional-style business rules and access control-related actions. Protune's rule language extends two previous languages: PAPL [ 20], which until 2002 was one of the most complete policy languages for policy-driven negotiation, and PeerTrust [ 2], which supports distributed credentials and a more flexible policy protection mechanism. In addition, the framework features a powerful declarative meta-language for driving some critical negotiation decisions, and integrity constraints for monitoring negotiations and credential disclosure.
Protune provides a framework with
• a trust management language supporting general provisional-style 6 actions (possibly user-defined);
• an extensible declarative meta-language for driving decisions about request formulation, information disclosure, and distributed credential collection;
• a parametrized negotiation procedure, which gives a semantics to the meta-language and provably satisfies some desirable properties for all possible metapolicies;
• integrity constraints for negotiation monitoring and disclosure control;
• general, ontology-based techniques for importing and exporting metapolicies and for smoothly integrating language extensions;
• advanced policy explanations 7 in order to answer why, why-not, how-to, and what-if queries [ 13].
The Protune policy framework offers high flexibility for specifying any kind of policy, integrates external systems at the policy level, and provides facilities for increasing user awareness, such as natural language explanations of the policies. It is entirely developed in Java, which permits its integration into Web environments as an applet (without requiring the installation of any additional software). Fig. 3 depicts the high-level architecture of a Protune Agent. It is composed of the following modules:
Fig. 3. Policy framework architecture.
• Communication Interface. It is in charge of the communication with other parties. Some examples of possible interfaces are secure socket connections or web services.
• Internal Java API. This API can be used by Java programs in order to integrate the policy framework's functionalities.
• Policy Engine Distributor. In case more than one policy engine exists, this component is in charge of forwarding any request to the appropriate one.
• Policy Engine. Specific policy engine in charge of processing requests. Currently, a Protune engine and a PeerTrust engine are implemented.
• Credential Selection and Termination Algorithm. This is a pluggable component that specifies the general negotiation strategy of the agent (see previous sections for a more detailed description).
• Inference Engine. It is in charge of checking whether the (local) policy has been fulfilled, as well as of other evaluation processes like extracting from the local policy (respectively from the policy of the other peer) the actions the current peer wants to execute (respectively the actions the other peer wants the current one to execute).
• Execution Handler. Responsible for executing actions and package calls specified in the policies.
• Credential Repository. This package is in charge of loading the local credentials, providing them when required, and checking that credentials received during a negotiation are not forged.
• RDBMS. This package is in charge of executing database queries to a relational database.
• File System. This package is in charge of executing queries based on regular expressions on specified files in a file system.
### 6.2 Online Demonstration
As part of our investigation, we have developed an online demonstration as a proof of concept of the feasibility of using policies for advanced eLearning scenarios. Since our goal was to show the interactions in which policies are involved, this demonstration adds actual policy-based reasoning and interactions to a learning management system scenario. The demo 8 is available at http://policy.l3s.uni-hannover.de:9080/policyFramework/elearning/.
It is important to note that the personalization and all the policy interactions are fully integrated in the demonstration and performed at runtime (and can even be modified by the user accessing the demonstration). On the other hand, the learning management system is just a mock-up and does not provide real functionality. For example, there are no actual instant messenger or file sharing components integrated in the demo, since these are out of the scope of this paper.
The demonstration site downloads an applet to the user's computer in order to provide the reasoning and negotiation capabilities required for the advanced interactions with the learning management system. Once loaded, the page shows the following elements (see also Fig. 4):
Fig. 4. Demonstration of the integration of a policy framework architecture in an eLearning scenario.
• a personalized greeting message;
• a list of the files shared by some colleagues;
• an instant messenger with the available online contacts;
• a learning agent, which shows some messages to the learner;
• a list of books related to the displayed lesson, which the learner may be interested in.
Each component allows the user visiting the demonstration site to perform several interactions:
• greeting message. Before the content is generated, the server requires the user to identify herself. Depending on the answer of the user, her name or a request to register will appear.
• file sharing. When clicking on any of the files, a negotiation with the server (emulating a negotiation with the other learner) takes place in order to allow access to the file. Only learners holding and disclosing a student card from the learning management system can retrieve the files. Otherwise, an appropriate explanation is shown.
• instant messenger. If there is a request for chatting with any of the online contacts, a request (possibly involving a negotiation) is performed in order to find out whether the other party is accepting chats at that very moment. Otherwise, an appropriate explanation is shown.
• main pane with lesson. Before delivering the content for the Java course, the server requests the client to provide a certification of her advanced knowledge in the topic. If that certificate is provided, the server delivers a "Java Programming for experts" unit. Otherwise, it delivers the "Java Programming for beginners" unit.
• learning agent. Depending on the identity of the learner, the agent will show different personalized messages oriented to helping the learner.
• book list. Depending on the level of knowledge of the learner (e.g., whether the learner has provided a certification of her advanced knowledge of Java), the list will contain different books, either for Java beginners or experts. In addition, if the user clicks on any of them, a negotiation will be performed in order to check whether the learner has a subscription with the publisher or whether she buys the book by providing a valid credit card. Otherwise, an appropriate explanation is shown.
The demo is initialized in such a way that the learner automatically discloses, for instance, her identity ("Alice") and the certification for advanced Java. However, this behavior can be changed by opening the policy editor (a link is provided on the site) and commenting or uncommenting the available policies. As a consequence of such changes, the interactions with the learning management system will generate different results at runtime.
In summary, the online demonstration shows that, by using policies, we enhance a static learning environment with flexibility, dynamicity, and interoperability: each user is allowed to adopt the behavior best suiting her needs, and the adaptation of the system can happen at runtime.
### 6.3 Evaluation of the Protune Architecture
For systems as described in this paper, the performance of the policy evaluation process is a very important aspect. Even if policies offer manifold possibilities, it is unlikely that users will accept extraordinarily long response times. The evaluation process is done by the policy engine, which is the heart of every policy integration. The policy engine operates on a specific set of policies to perform a logic reasoning process, resulting in an answer to the given query.
In order to evaluate the response time required by the policy engine and to observe the impact of its implementation in a productive environment, we assumed requests to a policy engine running on a user's computer. We used a laptop with an Intel Core 2 Duo T7300 CPU (2.00 GHz per core) and 2,030 Mbytes of RAM. The simulated user has a file containing the policies that will be loaded into the engine. We varied the number of policy rules in the file, assuming a minimum set of 100 policy rules, which already represents a very advanced and sophisticated policy-driven behavior of the user's system. During the test, we increased the number of rules in steps of one hundred up to 800, an amount that we consider extraordinarily big for a user but probably a regular amount for a server. Within the rule set, we also assumed a moderate complexity of the policy rules and of the connections between rules, that is, forcing the reasoning process to always evaluate several (possibly nested) rules. As depicted in Fig. 5, the policy engine scales well with the number of rules in the policy file: the response time does not noticeably increase.
Fig. 5. Response times of the policy engine for different amounts of policy rules.
The response time for one policy request is on average slightly above 200 ms (on a nonoptimized version of the Protune engine with debugging enabled). Moreover, Coi et al. [21] investigate possibilities to preevaluate policy-based decisions in order to reduce response times, in most cases to less than 10 ms per request.
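A benchmark of this kind can be reproduced in spirit with a small timing harness. The toy evaluator below merely stands in for the real Protune engine; only the rule counts mirror those used in the text:

```python
import time

# Hypothetical sketch of the benchmark set-up: evaluate a goal against
# policy files of growing size (100..800 rules, as in the text) and
# record the average response time per request.

def evaluate(goal, facts):
    return goal in facts  # stand-in for the engine's reasoning step

def benchmark(rule_counts, repetitions=100):
    results = {}
    for n in rule_counts:
        facts = {f"rule_{i}" for i in range(n)}
        start = time.perf_counter()
        for _ in range(repetitions):
            evaluate("rule_0", facts)
        results[n] = (time.perf_counter() - start) / repetitions
    return results

timings = benchmark([100, 200, 400, 800])
```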
## 7 Related Work
To the best of our knowledge, using policy-based behavior control in technology-enhanced learning environments has not been extensively researched. An approach aiming at federated access control in Web-service-based repositories is presented in [ 22]. In order to allow for an appropriate access control, the policy language XACML and the federated trust framework Shibboleth have been extended and integrated into an ECL middleware. In this framework, policies are based on a simple attribute directory service.
In LionShare [ 10], a similar approach is used exploiting Shibboleth. Security is provided by the so-called Access Control Lists expressed in XACML. These lists define which user can access which file depending on the user's properties, such as the membership of a certain faculty. However, none of these approaches allows for expressive access control supporting, e.g., action executions, negotiations, or explanations, and therefore, they do not meet the requirements identified in our scenario. Furthermore, using Shibboleth implies the existence of institutions users belong to, which is an assumption that does not apply in an open scenario for lifelong learning.
El-Khatib et al. [23] provide an abstract overview of privacy and security issues in advanced learning technologies, suggesting that policies suit security purposes well in an educational context. Besides the fact that, in our work, policies are applied more generally and not only to security, the work in [23] provides neither scenarios nor specific details about the usage of policies.

Lin and Lin [24] deal with policies based on the Ponder policy language within the scope of collaborative eLearning systems. The use of policies in such a framework is basically restricted to role-based access control and, therefore, does not match the needs of an open learning environment as described above.
The PRIME project [ 25] aims at developing a prototype of a privacy-enhancing identity management system. The project includes some scenarios demonstrating the applicability of the developed prototype, where, among others, a learning environment setting is addressed. PRIME makes use of several existing languages and protocols like, e.g., EPAL, XrML, P3P, and the policy language XACML. The main focus of its learning scenario is obscuring the identity of the participating members in their different roles. However, the PRIME project obviously does not cover functionality provided by using a policy-based system as we suggest in this paper.
Another eLearning project worth mentioning here is SeLeNe [ 26]. SeLeNe finished in 2004 and dealt with metadata of learning objects. One part of the project is highly related to this paper and covers the reactivity of learning object metadata [ 27]. One example application is the automatic notification on modifications of learning objects and the registration of a new user in a network showing an interesting (i.e., matching) profile. These reactivity features offer a complex change detection mechanism and are based on the so-called Event-Condition-Action Rules (ECA-Rules) defined on RDF. An ECA-Rule is a special kind of policy. SeLeNe discusses the basis of some of the features we require in our scenario (such as automatic notification via chat). However, the usage of policies we suggest in this paper allows for more complex conditions (not only based on RDF query languages) and actions (not only notifications) since we provide (among others) negotiations and arbitrary actions that are not part of the SeLeNe change detection framework.
Moreover, none of the approaches described above provides user awareness capabilities by, for instance, generating natural language explanations that may help the learners understand the policies and decisions.
## 8 Conclusions and Further Work
Open lifelong learning environments require flexible and interoperable approaches that are easy to use and to personalize by learners and tutors. This paper has described a scenario with advanced interactions among learners, tutors, agents, and the learning management system. It also showed how policies can address the requirements extracted from such a scenario and provide benefits not only in flexibility and dynamicity but also additional features like reasoning and interoperability. This paper has given an overview of existing policy frameworks and compared them according to a list of requirements previously identified. One of the policy languages was selected and used in order to specify policies that may be used at runtime to, e.g., control access to resources, perform negotiations, or generate explanations. Finally, we integrated the policy framework into a Web-based online demonstration as a proof-of-concept of the feasibility of our approach, demonstrating the concepts described throughout this paper, and evaluated the performance of the policy evaluation process in terms of response times.
In our future work, we will investigate the integration of policies into existing eLearning systems (see [ 28]) such as Moodle [ 29] or ILIAS [ 30] but also into Adaptive Educational Hypermedia Systems such as AHA! [ 31]. We also plan to create more tooling support for the management of policies by nonexpert users. Furthermore, Protune does not currently support reactivity (ECA-Rules) and we plan to explore such extensions to the language and framework in order to increase the number of possible scenarios it can address.
## ACKNOWLEDGMENTS
The authors' efforts were (partly) funded by the European Commission in the TENCompetence Project (IST-2004-02787; http://www.tencompetence.org).
## REFERENCES
• 1. A. Uszok, J. Bradshaw, R. Jeffers, N. Suri, P. Hayes, M. Breedy, L. Bunch, M. Johnson, S. Kulkarni, and J. Lott, "Kaos Policy and Domain Services: Toward a Description-Logic Approach to Policy Representation, Deconfliction, and Enforcement," Proc. Fourth IEEE Int'l Workshop Policies for Distributed Systems and Networks (POLICY '03), p. 93, 2003.
• 2. R. Gavriloaie, W. Nejdl, D. Olmedilla, K.E. Seamons, and M. Winslett, "No Registration Needed: How to Use Declarative Policies and Negotiation to Access Sensitive Resources on the Semantic Web," Proc. First European Semantic Web Symp. (ESWS '04), vol. 3053, pp. 342-356, May 2004.
• 3. N. Damianou, N. Dulay, E. Lupu, and M. Sloman, "The Ponder Policy Specification Language," Proc. Int'l Workshop Policies for Distributed Systems and Networks (POLICY '01), pp. 18-38, 2001.
• 4. P. Bonatti, and D. Olmedilla, "Driving and Monitoring Provisional Trust Negotiation with Metapolicies," Proc. Sixth IEEE Int'l Workshop Policies for Distributed Systems and Networks (POLICY '05), pp. 14-23, 2005.
• 5. L. Kagal, T.W. Finin, and A. Joshi, "A Policy Language for a Pervasive Computing Environment," Proc. Fourth IEEE Int'l Workshop Policies for Distributed Systems and Networks (POLICY '03), p. 63, June 2003.
• 6. A.H. Anderson, "An Introduction to the Web Services Policy Language (WSPL)," Proc. Fifth IEEE Int'l Workshop Policies for Distributed Systems and Networks (POLICY '04), p. 189, 2004.
• 7. OASIS eXtensible Access Control Markup Language, http://www.oasis-open.org/specs/index.php#xacmlv2.0.
• 8. J.L.D. Coi, P. Kärger, A.W. Koesling, and D. Olmedilla, "Exploiting Policies in an Open Infrastructure for Lifelong Learning," Proc. Second European Conf. Technology Enhanced Learning (EC-TEL '07), vol. 4753, pp. 26-40, Sept. 2007.
• 9. W. Nejdl, B. Wolf, C. Qu, S. Decker, M. Sintek, A. Naeve, M. Nilsson, M. Palmer, and T. Risch, Edutella: A P2P Networking Infrastructure Based on RDF, citeseer.ist.psu.edu/boris01edutella.html, 2001.
• 10. The Lionshare Project, http://lionshare.its.psu.edu/, 2008.
• 11. T. Nabeth, A.A. Angehrn, P.K. Mittal, and C. Roda, "Using Artificial Agents to Stimulate Participation in Virtual Communities," Proc. IADIS Int'l Conf. Cognition and Exploratory Learning in Digital Age (CELDA '05), Kinshuk, D.G. Sampson, and P.T. Isaas, eds., pp. 391-394, http://dblp.uni-trier.de/db/conf/iadis/celda2005.html#NabethAMR05, 2005.
• 12. F. Abel, E. Herder, P. Kärger, D. Olmedilla, and W. Siberski, "Exploiting Preference Queries for Searching Learning Resources," Proc. Second European Conf. Technology Enhanced Learning (EC-TEL '07), vol. 4753, pp. 143-157, Sept. 2007.
• 13. P.A. Bonatti, D. Olmedilla, and J. Peer, "Advanced Policy Explanations on the Web," Proc. 17th European Conf. Artificial Intelligence (ECAI '06), pp. 200-204, Aug./Sept. 2006.
• 14. P. Bonatti, and P. Samarati, "Regulating Service Access and Information Release on the Web," Proc. Seventh ACM Conf. Computer and Comm. Security (CCS '00), pp. 134-143, 2000.
• 15. W. Winsborough, K. Seamons, and V. Jones, "Automated Trust Negotiation," Technical Report TR-2000-05, DARPA, citeseer.ist.psu.edu/article/winsborough00automated.html, 2000.
• 16. J.S. Brown, and P. Duguid, "Adaptive and Intelligent Web-Based Educational Systems," Int'l J. Artificial Intelligence in Education, vol. 13, pp. 156-169, 2003.
• 17. T. Yu, M. Winslett, and K.E. Seamons, "Interoperable Strategies in Automated Trust Negotiation," Proc. Eighth ACM Conf. Computer and Comm. Security (CCS '01), pp. 146-155, 2001.
• 18. C. Duma, A. Herzog, and N. Shahmehri, "Privacy in the Semantic Web: What Policy Languages Have to Offer," Proc. Eighth IEEE Int'l Workshop Policies for Distributed Systems and Networks (POLICY '07), pp. 109-118, 2007.
• 19. J.W. Lloyd, Foundations of Logic Programming. Springer-Verlag, 1984.
• 20. P.A. Bonatti, and P. Samarati, "Regulating Service Access and Information Release on the Web," Proc. Seventh ACM Conf. Computer and Comm. Security (CCS '00), pp. 134-143, 2000.
• 21. J.L.D. Coi, E. Ioannou, A. Koesling, and D. Olmedilla, "Access Control for Sharing Semantic Data across Desktops," Proc. First Int'l Workshop Privacy Enforcement and Accountability with Semantics (PEAS '07), Nov. 2007.
• 22. M. Hatala, T.M. Eap, and A. Shah, "Unlocking Repositories: Federated Security Solution for Attribute and Policy Based Access to Repositories via Web Services," Proc. First Int'l Conf. Availability, Reliability and Security (ARES '06), pp. 895-903, 2006.
• 23. K. El-Khatib, L. Korba, Y. Xu, and G. Yee, "Privacy and Security in E-Learning," Int'l J. Distance Education, vol. 1, no. 4, 2003.
• 24. Y. Lin, and Lin, "Policy-Based Privacy and Security Management for Collaborative E-Education Systems," Proc. Fifth IASTED Multi-Conf. Computers and Advanced Technology in Education (CATE), 2002.
• 25. PRIME: Privacy and Identity Management for Europe, https://www.prime-project.eu, 2008.
• 26. SeLeNe: Self eLearning Networks, https://www.prime-project.eu, 2008.
• 27. G. Papamarkos, A. Poulovassilis, and P.T. Wood, Eca Rule Languages for Active Self e-Learning Networks, seLeNe Project Deliverable 4.4, 2003.
• 28. A.W. Koesling, E. Herder, and D. Krause, "Flexible Adaptivity in AEHS Using Policies," Proc. Conf. Adaptive Hypermedia and Adaptive Web-Based Systems (AH '08), July 2008.
• 29. Moodle: Moodle Open Source eLearning System, http://moodle.org/, 2008.
• 30. ILIAS: Ilias Open Source eLearning System, http://www.ilias.de/, 2008.
• 31. AHA!: Adaptive Hypermedia for All, http://aha.win.tue.nl/, 2008.
|
{}
|
## USACO 2019 US Open Contest, Bronze
Contest has ended.
A fire has broken out on the farm, and the cows are rushing to try and put it out!
The farm is described by a $10 \times 10$ grid of characters like this:
..........
..........
..........
..B.......
..........
.....R....
..........
..........
.....L....
..........
The character 'B' represents the barn, which has just caught on fire. The 'L' character represents a lake, and 'R' represents the location of a large rock.
The cows want to form a "bucket brigade" by placing themselves along a path between the lake and the barn so that they can pass buckets of water along the path to help extinguish the fire. A bucket can move between cows if they are immediately adjacent in the north, south, east, or west directions. The same is true for a cow next to the lake --- the cow can only extract a bucket of water from the lake if she is immediately adjacent to the lake. Similarly, a cow can only throw a bucket of water on the barn if she is immediately adjacent to the barn.
A cow cannot be placed on the square containing the large rock, and the barn and lake are guaranteed not to be immediately adjacent to each other.
#### INPUT FORMAT (file buckets.in):
The input file contains 10 rows each with 10 characters, describing the layout of the farm.
#### OUTPUT FORMAT (file buckets.out):
Output a single integer giving the minimum number of cows needed to form a viable bucket brigade.
#### SAMPLE INPUT:
..........
..........
..........
..B.......
..........
.....R....
..........
..........
.....L....
..........
#### SAMPLE OUTPUT:
7
In this example, here is one possible solution, which involves the optimal number of cows (7):
..........
..........
..........
..B.......
..C.......
..CC.R....
...CCC....
.....C....
.....L....
..........
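A common approach (my own sketch, not an official solution): the answer is the Manhattan distance between the barn and the lake minus one, plus two extra cows when the rock sits directly between them on a shared row or column, forcing a detour:

```python
def min_cows(grid):
    # Find the coordinates of the barn (B), lake (L), and rock (R).
    pos = {}
    for r, row in enumerate(grid):
        for c, ch in enumerate(row):
            if ch in "BLR":
                pos[ch] = (r, c)
    (br, bc), (lr, lc), (rr, rc) = pos["B"], pos["L"], pos["R"]
    # An L-shaped path between B and L needs (Manhattan distance - 1) cows.
    cows = abs(br - lr) + abs(bc - lc) - 1
    # If B and L share a row or column and the rock lies strictly between
    # them, the straight path must bend around the rock: +2 cows.
    if br == lr == rr and min(bc, lc) < rc < max(bc, lc):
        cows += 2
    elif bc == lc == rc and min(br, lr) < rr < max(br, lr):
        cows += 2
    return cows
```

On the sample farm above, this returns 7, matching the sample output.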
Problem credits: Brian Dean
|
{}
|
Primary: 92D25, 92D30; Secondary: 92B99.
Modeling the role of healthcare access inequalities in epidemic outcomes
1. Harvard T.H. Chan School of Public Health, Department of Biostatistics, Boston, MA
2. SAL MCMSC, School of Human Evolution and Social Change, Arizona State University, Tempe, AZ
3. School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ
## Abstract
Urban areas, with large and dense populations, offer conditions that favor the emergence and spread of certain infectious diseases. One common feature of urban populations is the existence of large socioeconomic inequalities which are often mirrored by disparities in access to healthcare. Recent empirical evidence suggests that higher levels of socioeconomic inequalities are associated with worsened public health outcomes, including higher rates of sexually transmitted diseases (STDs) and lower life expectancy. However, the reasons for these associations are still speculative. Here we formulate a mathematical model to study the effect of healthcare disparities on the spread of an infectious disease that does not confer lasting immunity, such as is true of certain STDs. Using a simple epidemic model of a population divided into two groups that differ in their recovery rates due to different levels of access to healthcare, we find that both the basic reproductive number ($\mathcal{R}_{0}$) of the disease and its endemic prevalence are increasing functions of the disparity between the two groups, in agreement with empirical evidence. Unexpectedly, this can be true even when the fraction of the population with better access to healthcare is increased if this is offset by reduced access within the disadvantaged group. Extending our model to more than two groups with different levels of access to healthcare, we find that increasing the variance of recovery rates among groups, while keeping the mean recovery rate constant, also increases $\mathcal{R}_{0}$ and disease prevalence. In addition, we show that these conclusions are sensitive to how we quantify the inequalities in our model, underscoring the importance of basing analyses on appropriate measures of inequalities.
These insights shed light on the possible impact that increasing levels of inequalities in healthcare access can have on epidemic outcomes, while offering plausible explanations for the observed empirical patterns.
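As a back-of-the-envelope illustration (an assumption-laden sketch, not the paper's actual model): in a uniformly mixing SIS-type population split into groups with proportions $p_i$ and recovery rates $\gamma_i$, the basic reproductive number is $\mathcal{R}_0 = \beta \sum_i p_i/\gamma_i$, and by Jensen's inequality spreading the $\gamma_i$ apart while holding their mean fixed increases $\mathcal{R}_0$:

```python
# Toy multi-group SIS illustration: R0 = beta * sum_i p_i / gamma_i
# under uniform mixing. All parameter values below are arbitrary.
def r0(beta, proportions, recovery_rates):
    return beta * sum(p / g for p, g in zip(proportions, recovery_rates))

beta = 0.5
p = [0.5, 0.5]
equal_access = r0(beta, p, [1.0, 1.0])  # mean recovery rate 1.0, no disparity
unequal = r0(beta, p, [1.5, 0.5])       # same mean rate, large disparity
print(equal_access, unequal)            # disparity raises R0
```

Here the mean recovery rate is 1.0 in both cases, yet the disparate scenario yields a larger $\mathcal{R}_0$ because the slow-recovering group dominates the sum of infectious periods.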
Citation: Oscar Patterson-Lomba, Muntaser Safan, Sherry Towers, Jay Taylor. Modeling the role of healthcare access inequalities in epidemic outcomes. Mathematical Biosciences and Engineering, 2016, 13(5): 1011-1041. doi: 10.3934/mbe.2016028
|
{}
|
## Intermediate Algebra for College Students (7th Edition)
$a$
By definition, for any real number $a$, $\sqrt[3]{a^3} = a$.
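For instance, the identity also holds for negative values:

$\sqrt[3]{(-2)^{3}} = \sqrt[3]{-8} = -2$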
|
{}
|
# OEF sequences --- Introduction ---
This module actually contains 16 exercises on infinite sequences: convergence, limit, recursive sequences, ...
### Two limits
Let () be an infinite sequence of real numbers. If one has
and for ,
what can be said about its convergence? (You should choose the most pertinent consequence.)
### Comparison of sequences
Let () and () be two sequences of real numbers where () converges towards . If one has
,
what can be said about the convergence of ()? (You must choose the most pertinent consequence.)
### Growth and bound
Let () be a sequence of real numbers. If () is , what can be said about its convergence (after its existence)?
### Convergence and difference of terms
Let be a sequence of real numbers. Among the following assertions, which are true, which are false?
1. If , then .
2. If , then .
### Convergence and ratio of terms
Let be a sequence of real numbers. Among the following assertions, which are true, which are false?
1. If , then .
2. If , then .
### Epsilon
Let be a sequence of real numbers. What does the condition
imply on the convergence of ? (You must choose the most pertinent consequence.)
### Fraction 2 terms
Compute the limit of the sequence (un), where
### Fraction 3 terms
Compute the limit of the sequence (un), where
### Fraction 3 terms II
Compute the limit of the sequence (un), where
WARNING: In this exercise, approximate replies will be considered false! Type pi instead of 3.14159265, for example.
### Growth comparison
What is the nature of the sequence (un), where
?
### Monotony I
Study the growth, sup, inf, min, max of the sequence (un) for n , where
.
Write for a value that does not exist, and or - for + or -.
### Monotony II
Study the growth, sup, inf, min, max of the sequence (un) for n , where
.
Write for a value that does not exist, and or - for + or -.
### Powers I
Compute the limit of the sequence (un), where
### Powers II
Compute the limit of the sequence (un), where
Type no if the sequence is divergent.
### Recursive function
The sequence such that
is a recursive sequence defined by for a certain function . Find this function.
### Recursive limit
Find the limit of the recursive sequence such that
Other exercises on: sequences Convergence Limit
|
{}
|
Student project
# Effect of calcification on agarose gel stiffness and integration strength with Bone
The zone of calcified cartilage (ZCC) is critical for the normal attachment of articular cartilage to bone as well as to the biomimetic bioengineering of osteochondral tissue constructs. However, relatively few osteochondral tissue engineering approaches have created a tissue resembling and functioning like the ZCC. The implementation of a double diffusion system, wherein calcium (${Ca}^{2+}$) and phosphate (${{PO}_{4}}^{3-}$) ions are diffused toward each other, provides a method to induce local mineralization within a hydrogel. The objectives of the present study were to (1) estimate the diffusivity of ${Ca}^{2+}$ and ${{PO}_{4}}^{3-}$ within a 2% agarose gel, (2) characterize morphologically, chemically, and biomechanically the mineral structure formed within agarose using the double diffusion system, and (3) determine the feasibility of using the double diffusion system to create a mineral structure at the site of agarose attachment to subchondral bone (ScB), trabecular bone (TB), or porous titanium. The diffusion of ${Ca}^{2+}$ and ${{PO}_{4}}^{3-}$ created calcified agarose consistent with the formation of hydroxyapatite (HA). From concentration profiles, the diffusion coefficients for ${Ca}^{2+}$ and ${{PO}_{4}}^{3-}$ in a 2% agarose gel were estimated to be $6.4\times10^{-6}$ and $1.3\times10^{-6}$ cm$^2$/s, respectively. Using the double diffusion system, mineralization was visualized grossly as a broad precipitation band and by micro-CT scan as a toroidal structure. The indentation stiffness of the gel was increased (+50%) to a peak coincident with the location of the peak precipitation band and chemical content of ${Ca}^{2+}$ and ${{PO}_{4}}^{3-}$. The integration strength between agarose and ScB (0.27 ± 0.02 N) was less than that between agarose and TB (0.73 ± 0.05 N). Application of the double diffusion system did induce calcification locally at the targeted site of a porous titanium disc; however, it did not at either a ScB or TB target.
These results may be applied to enhance formation of a biomimetic interface between hydrogel and a target porous rigid structure.
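Given the estimated diffusivities, the textbook scaling $\ell = \sqrt{4Dt}$ for the characteristic diffusion length (with an assumed one-day diffusion time, chosen only for illustration) hints at why the faster-moving ${Ca}^{2+}$ front travels roughly twice as far as the ${{PO}_{4}}^{3-}$ front before the two meet:

```python
import math

# Characteristic diffusion length l = sqrt(4*D*t) for the two ions,
# using the diffusivities estimated in the study. The diffusion time t
# is an assumed, illustrative value.
D = {"Ca2+": 6.4e-6, "PO4^3-": 1.3e-6}  # cm^2/s (from the abstract)
t = 24 * 3600                            # 1 day, in seconds (assumed)

lengths = {ion: math.sqrt(4 * d * t) for ion, d in D.items()}
for ion, length in lengths.items():
    print(f"{ion}: ~{length:.2f} cm")
```

The ratio of the two lengths equals $\sqrt{D_{Ca}/D_{PO_4}} \approx 2.2$, independent of the assumed time.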
Note:
Cartilage Tissue Engineering Laboratory, University of California, San Diego
#### Reference
• SSV-STUDENT-2009-057
Record created on 2009-12-07, modified on 2016-08-08
|
{}
|
# Advent of Code 2021 in Kotlin - Day 22
## Introduction
The Day 22 problem is another example of a problem strictly divided into two parts that seem identical but require very different solutions. In the first part we can start with a naive implementation, which works because the considered space is limited, but in the second part we have to come up with a smarter approach. Let's see the idea behind it and the implementation in Kotlin.
## Solution
For the first part we prepare a straightforward solution which keeps all the cubes in space separately, as their number is limited by the task description (i.e., it can be at most $101^3$). So for every Step we take care only of the Cubes from the limited range and add them to or remove them from the current collection.
However, in the second part this approach is too naive. That’s because the sizes of the added and removed cubes are really huge, so adding individual Cubes in space would take too much time and memory.
After some time thinking about the solution, we can come up with the approach of inserting 3D ranges into the reactor, so instead of keeping information about individual cubes, we keep groups of them.
The hardest part of the solution is implementing the difference of Range3D, which we represent as a triple of IntRange. To do that, we provide a few helper infix functions that make checking the relative position of ranges easier. In my opinion, the trickiest part was the proper implementation of operator fun IntRange.minus(r: IntRange), which is later used in operator fun Range3D.minus(r: Range3D). The main idea behind this approach is to divide the considered Range3Ds into smaller pieces along each axis and check which of them belong to the result Range3D.
### Day22.kt
import kotlin.math.max
import kotlin.math.min
object Day22 : AdventDay(2021, 22) { // enclosing declaration assumed; missing from the listing
override fun solve() {
val data = reads<String>() ?: return
val steps = data.map { it.toStep() }
LimitedReactor(limit = -50..50).apply { steps.forEach { execute(it) } }.size.printIt()
Reactor().apply { steps.forEach { execute(it) } }.size.printIt()
}
}
private fun String.toRange() = drop(2).split("..")
.map { it.toInt() }.let { (f, t) -> f..t }
private fun String.toStep() = split(" ").let { (a, r) ->
val (x, y, z) = r.split(",").map { it.toRange() }
Step(Action.valueOf(a.uppercase()), Range3D(x, y, z))
}
private infix fun IntRange.limit(l: IntRange?) = l?.let { max(first, l.first)..min(last, l.last) } ?: this
private enum class Action { ON, OFF }
private data class Step(val action: Action, val range: Range3D) {
fun cubes(l: IntRange? = null) = buildSet {
for (xi in range.x limit l) for (yi in range.y limit l)
for (zi in range.z limit l) add(Cube(xi, yi, zi))
}
}
private data class Cube(val x: Int, val y: Int, val z: Int)
private class LimitedReactor(private val limit: IntRange) {
private val on = hashSetOf<Cube>()
val size get() = on.size
fun execute(step: Step) = when (step.action) {
Action.ON -> on += step.cubes(limit)
Action.OFF -> on -= step.cubes(limit)
}
}
private infix fun IntRange.outside(r: IntRange) = last < r.first || first > r.last
private infix fun IntRange.inside(r: IntRange) = first >= r.first && last <= r.last
private val IntRange.size get() = last - first + 1
private operator fun IntRange.minus(r: IntRange): Sequence<IntRange> = when {
this inside r -> sequenceOf(this)
r inside this -> sequenceOf(first..r.first - 1, r, r.last + 1..last)
r outside this -> sequenceOf(this)
last < r.last -> sequenceOf(first..r.first - 1, r.first..last)
r.first < first -> sequenceOf(first..r.last, r.last + 1..last)
else -> error("Not defined minus for $this-$r")
}.filter { it.size > 0 }
private class Reactor {
private val on: HashSet<Range3D> = hashSetOf()
val size get() = on.sumOf { it.size }
fun execute(step: Step) = when (step.action) {
Action.OFF -> on.flatMap { it - step.range }.toHashSet().also { on.clear() }
Action.ON -> on.fold(hashSetOf(step.range)) { cut, curr -> cut.flatMap { it - curr }.toHashSet() }
}.let { on += it }
}
private data class Range3D(val x: IntRange, val y: IntRange, val z: IntRange) {
val size get() = x.size.toLong() * y.size.toLong() * z.size.toLong()
operator fun minus(r: Range3D): Sequence<Range3D> =
if (r outside this) sequenceOf(this)
else sequence {
for (x in x - r.x) for (y in y - r.y) for (z in z - r.z) yield(Range3D(x, y, z))
}.filter { it inside this && it outside r }
infix fun outside(r: Range3D) = x outside r.x || y outside r.y || z outside r.z
infix fun inside(r: Range3D) = x inside r.x && y inside r.y && z inside r.z
}
## Extra notes
The whole solution takes advantage of defining many infix and operator functions for ranges. Most of them are defined in order to get a simple way of calculating the difference of many ranges.
When performing most of the operations on sets of ranges, we use sequences to produce the values. That's because many transformations are done on these iterables, so the sequence-based approach is preferred. Building sequences in Kotlin is as easy as building collections, with the sequence { } builder or the sequenceOf() function, so we should definitely consider using them in our code more frequently.
We haven’t mentioned yet in our discussions the getters' implementation in Kotlin. While usually we define the field values with immediate initialisation like
val someField: FieldType = calculatedSomeFieldValue()
it might be not a good approach in multiple situations because the calculatedSomeFieldValue function is called just on object initialisation.
One of the approaches here is to provide the getter implementation of the field, so it’s values will be calculated every time when the property is accessed. We can with simple expression definition like
val someField: FieldType get() = calculatedSomeFieldValue()
which can also be written as multiple statements if some extra instructions are needed to calculate the result, like
val someField: FieldType get() {
val intermediateValue = calculatedSomeFieldValue()
return valueTransformation(intermediateValue)
}
In both of these cases, the function calculating the field value is called every time the field is accessed. That may take a lot of resources, so sometimes the lazy approach is preferred. It can be used when the returned field value is known to always be the same, so it can be cached in a delegated property. It's enough to define such a field as
val someField: FieldType by lazy { calculatedSomeFieldValue() }
Then, calculatedSomeFieldValue is called only on the first access of someField. It's a short and pretty approach to get a really cool effect, so we should remember it when defining the fields in our classes (especially when they depend on some object's state) 🤞.
###### Student of Computer Science
My interests include robotics (mainly with Arduino), mobile development for Android (love Kotlin) and Java SE/EE applications development.
|
{}
|
## Algebra 1
B) $2x+7y=28$
Multiply the equation by 7: $7y=2x-28$
Subtract $2x$ from both sides: $-2x+7y=-28$
Multiply the equation by $-1$: $2x-7y=28$
|
{}
|
# Changelog¶
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
## [Unreleased]¶
### Fixed¶
• imod.mf6.open_hds() did not read the appropriate bytes from the heads file, apart for the first timestep. It will now read the right records.
• Use the appropriate array for modflow6 timestep duration: the imod.mf6.Model.write() would write the timesteps multiplier in place of the duration array.
### Added¶
• imod.util.to_ugrid2d() has been added to convert a (structured) xarray DataArray or Dataset to a quadrilateral UGRID dataset.
## [0.10.1] - 2020-10-19¶
### Changed¶
• imod.wq.SeawatModel.write() now generates iMOD-WQ runfiles with more intelligent use of the “macro tokens”. : is used exclusively for ranges; \$ is used to signify all layers. (This makes runfiles shorter, speeding up parsing, which takes a significant amount of time in the runfile to namefile conversion of iMOD-WQ.)
• Datetime formats are inferred based on length of the time string according to %Y%m%d%H%M%S; supported lengths 4 (year only) to 14 (full format string).
### Fixed¶
• IO methods for IDF files will now correctly identify double precision IDFs. The correct record length identifier is 2295 rather than 2296 (2296 was a typo in the iMOD manual).
• imod.wq.SeawatModel.write() will now write the correct path for recharge package concentration given in IDF files. It did not prepend the name of the package correctly (resulting in paths like concentration_l1.idf instead of rch/concentration_l1.idf).
• imod.idf.save() will simplify constant cellsize arrays to a scalar value – this greatly speeds up drawing in the iMOD-GUI.
## [0.10.0] - 2020-05-23¶
### Changed¶
• from_file() constructors have been added to all imod.wq.Package. This allows loading directly package from a netCDF file (or any file supported by xarray.open_dataset), or a path to a Zarr directory with suffix “.zarr” or “.zip”.
• This can be combined with the cache argument in from_file() to enable caching of answers to avoid repeated computation during imod.wq.SeawatModel.write(); it works by checking whether input and output files have changed.
• The resultdir_is_workspace argument has been added to imod.wq.SeawatModel.write(). iMOD-wq writes a number of files (e.g. list file) in the directory where the runfile is located. This results in mixing of input and output. By setting it True, all model output is written in the results directory.
• imod.visualize.imshow_topview() has been added to visualize a complete DataArray with atleast dimensions x and y; it dumps PNGs into a specified directory.
• Some support for 3D visualization has been added. imod.visualize.grid_3d() and imod.visualize.line_3d() have been added to produce pyvista meshes from xarray.DataArray’s and shapely polygons, respectively. imod.visualize.GridAnimation3D and imod.visualize.StaticGridAnimation3D have been added to setup 3D animations of DataArrays with transient data.
• Support for out of core computation by imod.prepare.Regridder if source is chunked.
• imod.ipf.read() now reports the problematic file if reading errors occur.
• imod.prepare.polygonize() added to polygonize DataArrays to GeoDataFrames.
• Added more support for multiple species imod-wq models, specifically: scalar concentration for boundary condition packages and well IPFs.
## [0.7.1] - 2019-08-07¶
### Added¶
• "multilinear" has been added as a regridding option to imod.prepare.Regridder to do linear interpolation up to three dimensions.
• Boundary condition packages in imod.wq support a method called add_timemap to do cyclical boundary conditions, such as summer and winter stages.
### Fixed¶
• imod.idf.save no longer fails on a single IDF when it is a voxel IDF (when it has top and bottom data).
• imod.prepare.celltable now successfully does parallel chunkwise operations, rather than raising an error.
• imod.Regridder’s regrid method now successfully returns source if all dimensions already have the right cell sizes, rather than raising an error.
• imod.idf.open_subdomains is much faster now at merging different subdomain IDFs of a parallel modflow simulation.
• imod.idf.save no longer suffers from extremely slow execution when the DataArray to save is chunked (it got extremely slow in some cases).
• Package checks in imod.wq.SeawatModel successfully reduce over dimensions.
• Fix last case in imod.prepare.reproject where it did not allocate a new array yet, but returned like instead of the reprojected result.
## [0.7.0] - 2019-07-23¶
### Changed¶
• Namespaces: lift many functions one level, such that you can use e.g. the function imod.prepare.reproject instead of imod.prepare.reproject.reproject
### Removed¶
• All that was deprecated in v0.6.0
## [0.6.1] - 2019-04-17¶
### Added¶
• Support nonequidistant models in runfile
### Fixed¶
• Time conversion in runfile now also accepts cftime objects
## [0.6.0] - 2019-03-15¶
The primary change is that a number of functions have been renamed to better communicate what they do.
The load function name was not appropriate for IDFs, since the IDFs are not loaded into memory. Rather, they are opened and the headers are read; the data is only loaded when needed, in accordance with xarray’s design; compare for example xarray.open_dataset. The function has been renamed to open.
Similarly, load for IPFs has been deprecated. imod.ipf.read now reads both single and multiple IPF files into a single pandas.DataFrame.
### Removed¶
• imod.idf.setnodataheader
### Deprecated¶
• Opening IDFs with imod.idf.load, use imod.idf.open instead
• Opening a set of IDFs with imod.idf.loadset, use imod.idf.open_dataset instead
• Reading IPFs with imod.ipf.load, use imod.ipf.read
• Reading IDF data into a dask array with imod.idf.dask, use imod.idf._dask instead
• Reading an iMOD-seawat .tec file, use imod.tec.read instead.
### Changed¶
• Use np.datetime64 when dates are within time bounds, use cftime.DatetimeProlepticGregorian when they are not (matches xarray defaults)
• assert is no longer used to catch faulty input arguments, appropriate exceptions are raised instead
### Fixed¶
• idf.open: sorts both paths and headers consistently so data does not end up mixed up in the DataArray
• idf.open: Return an xarray.CFTimeIndex rather than an array of cftime.DatetimeProlepticGregorian objects
• idf.save properly forwards nodata argument to write
• idf.write coerces coordinates to floats before writing
• ipf.read: Significant performance increase for reading IPF timeseries by specifying the datetime format
• ipf.write no longer writes ,, for missing data (which iMOD does not accept)
## [0.5.0] - 2019-02-26¶
### Removed¶
• Reading IDFs with the chunks option
### Deprecated¶
• Reading IDFs with the memmap option
• imod.idf.dataarray, use imod.idf.load instead
### Changed¶
• IDF: instead of res and transform attributes, use dx and dy coordinates (0D or 1D)
• Use cftime.DatetimeProlepticGregorian to support time instead of np.datetime64, allowing longer timespans
• Repository moved from https://gitlab.com/deltares/ to https://gitlab.com/deltares/imod/
• Notebook in examples folder for synthetic model example
• Support for nonequidistant IDF files, by adding dx and dy coordinates
• IPF support implicit itype
CRISPRseek: alternative PAM sequence from NmCas9
Julie Zhu ★ 4.3k
@julie-zhu-3596
On Jun 11, 2015, at 7:36 AM, Gao, Xin (Daniel) <Xin.Gao@umassmed.edu> wrote:
Hi Julie,
Thank you very much for your great help! After running the codes under r, I found some points I still couldn't figure out by myself.
1)I ran the Results1 code to find gRNAs. I noticed the code only works when I deleted "allowed.mismatch.PAM = 4,". I attach the case and error below.
Results1 <- offTargetAnalysis(inputFilePath, findgRNAsWithREcutOnly = FALSE,
findPairedgRNAOnly = FALSE,
BSgenomeName = Hsapiens, chromToSearch = "", PAM = "NNNNGATT", PAM.size = 8, PAM.pattern = "NNNNGATT$", txdb = TxDb.Hsapiens.UCSC.hg19.knownGene, orgAnn = org.Hs.egSYMBOL, max.mismatch = 3, outputDir = outputDir, allowed.mismatch.PAM = 4, overwrite = TRUE)
Error in offTargetAnalysis(inputFilePath, findgRNAsWithREcutOnly = FALSE, : unused argument (allowed.mismatch.PAM = 4)
2)I tried to use PAM = "NNNNGHTT" (H = [A|C|T], as you told us before) and it worked, but only when I changed PAM to "NNNNGHTT" while leaving PAM.pattern = "NNNNGATT$" unchanged. I originally thought I should change PAM.pattern instead of PAM?
3)If I changed max.mismatch to 0 or 1 or 2, I had the same gRNA results all the time. I am thinking max.mismatch only works in offtarget() function and it has nothing to do with finding gRNA, right?
4)I am curious whether CRISPRseek will work faster with gRNA.size=24, PAM="GATT". I tried to modify this code a little but unfortunately it didn't work. Maybe there is a problem with an internal searching parameter? If it is too complicated to modify the codes, we can still stick to this NNNNGATT pattern.
5)I haven't worked on off-target analysis carefully since it takes time to get updated 1.9.1 version. But it seems I encounter the same error if I keep "allowed.mismatch.PAM = 4,".
Thank you again if you could answer these questions.
Sincerely,
Daniel
From: Zhu, Lihua (Julie)
Sent: Wednesday, June 10, 2015 7:49 PM
To: Gao, Xin (Daniel)
Subject: Re: CRISPRseek to analyze NmCas9
Daniel,
Please see my answer to your question and code examples below given that you are interested in searching human genome. Attached are the analysis results of your sequences.
Best regards,
Julie
From: <Gao>, "Xin (Daniel)" <Xin.Gao@umassmed.edu>
Date: Wednesday, June 10, 2015 2:08 PM
To: Lihua Julie Zhu <julie.zhu@umassmed.edu>
Subject: RE: CRISPRseek to analyze NmCas9
Hi Julie,
1)The first question I have now is how to input the sequence instead of using the example sequence, by writing proper code. Now we are interested in searching for gRNAs on chromosomes 6 and 22. Please see the attached four sites we are interested in. I am also wondering whether there is any limitation on the length of the input sequence; for example, can we ask CRISPRseek to find all good targeting sites throughout chromosome 6 or even the whole genome?
You first create a fasta file (plain text, see attached file as an example) and save the fasta file as inputSeq.fa in a directory, e.g., ~/CRISPRseek, where ~ means your home directory.
Then set the working directory to ~/CRISPRseek in R, and set inputFilePath and outputDir as follows.
setwd("~/CRISPRseek")
inputFilePath="~/Documents/ConsultingActivities/CRISPRseek/ErikSontheimer/inputSeq.fa"
outputDir <- getwd()
There is no limitation on length of the input sequence as long as you input the sequence as a fasta file. To just find gRNAs without off target analysis, it is doable with whole genome scan.
To find gRNAs without off target search, please set chromToSearch = ""
For example,
Results1 <- offTargetAnalysis(inputFilePath, findgRNAsWithREcutOnly = FALSE,
findPairedgRNAOnly = FALSE,
BSgenomeName = Hsapiens, chromToSearch = "", PAM = "NNNNGATT", PAM.size = 8, PAM.pattern = "NNNNGATT$", txdb = TxDb.Hsapiens.UCSC.hg19.knownGene, orgAnn = org.Hs.egSYMBOL, max.mismatch = 3, outputDir = outputDir, allowed.mismatch.PAM = 4, overwrite = TRUE)
2)After finding the best targeting place, we plan to predict the off-target effects of this gRNA. We want to know the possible off-target sites across the whole genome. At this point, should I modify the codes to let offtarget() search the whole genome, not only the input sequence?
To perform a genome-wide off-target search for gRNAs in your input sequence, please set chromToSearch = "all" and max.mismatch = 3 or a number you prefer. Please note that I have customized the code to search for gRNAs with NNNNGATT as the PAM sequence.
Results2 <- offTargetAnalysis(inputFilePath, findgRNAsWithREcutOnly = FALSE,
findPairedgRNAOnly = FALSE,
BSgenomeName = Hsapiens, chromToSearch = "all", PAM = "NNNNGATT", PAM.size = 8, PAM.pattern = "NNNNGATT$",
txdb = TxDb.Hsapiens.UCSC.hg19.knownGene,
orgAnn = org.Hs.egSYMBOL, max.mismatch = 3,
outputDir = outputDir, allowed.mismatch.PAM = 4,overwrite = TRUE)
Please find attached the analysis results allowing max.mismatch = 3. Please let me know if you spot any error. Thanks!
FYI, the above code for off target analysis only works with the development version of CRISPRseek . I have deposited the updated package (version 1.9.1) to Bioconductor site for you to download at http://bioconductor.org/packages/devel/bioc/html/CRISPRseek.html. It will take a couple of days for the updated package to become available.
Please do not hesitate to contact me if you need any clarification or help.
Thank you very much if you could work out the codes for us and answer my questions.
Daniel
From: Zhu, Lihua (Julie)
Sent: Wednesday, June 10, 2015 6:55 AM
To: Baehrecke, Eric
Cc: Gao, Xin (Daniel); Sontheimer, Erik
Subject: Re: CRISPRseek to analyze NmCas9
Whoops. Thank you very much, Eric!
Best,
Julie
On Jun 9, 2015, at 10:52 PM, Zhu, Lihua (Julie) <Julie.Zhu@umassmed.edu> wrote:
Daniel,
There are no gRNAs in the example sequence inputseq.fa provided by the software that meet your PAM requirement. Please remember to use your own sequence, not the example sequence from the package, for a real search.
findgRNAs(inputFilePath = system.file("extdata","inputseq.fa", package = "CRISPRseek"),pairOutputFile = "testpairedgRNAs.xls",findPairedgRNAOnly = FALSE, PAM="NNNNGATT", PAM.size=8, gRNA.size = 20)
A DNAStringSet instance of length 0
Warning message:
In FUN(1L[[1L]], ...) : No gRNAs found in the input sequence Hsap_GATA1_ex2
To show that findgRNAs does find gRNAs with different PAM and different gRNA size , here is an example using the example sequence with PAM ="NNNNCAGG"
findgRNAs(inputFilePath = system.file("extdata","inputseq.fa", package = "CRISPRseek"),pairOutputFile = "testpairedgRNAs.xls",findPairedgRNAOnly = FALSE, PAM="NNNNCAGG", PAM.size=8, gRNA.size = 20)
A DNAStringSet instance of length 2
width seq names
[1] 28 CTCTGGTGTC...CAGAATCAGG Hsap_GATA1_ex2_gR34f
[2] 28 ATTCTGGTGT...CCAGAGCAGG Hsap_GATA1_ex2_gR25r
The hitsFile contains the results from an NGG search, which is why you see NGG there. You should not need to use buildFeatureVectorForScoring since this function is called by the offTargetAnalysis function. The only function you need to use is offTargetAnalysis, which calls all the other functions automatically.
hitsFile <- system.file("extdata", "hits.txt", package = "CRISPRseek")
buildFeatureVectorForScoring(hits,gRNA.size=28,canonical.PAM="GATT")
Could you please send me the sequence in chr6 you are interested in searching for gRNAs, and I will work out the code for you? Thanks!
Best regards,
Julie
From: <Gao>, "Xin (Daniel)" <Xin.Gao@umassmed.edu>
Date: Tuesday, June 9, 2015 6:44 PM
To: Lihua Julie Zhu <julie.zhu@umassmed.edu>
Subject: CRISPRseek to analyze NmCas9
Hi Julie,
I am the graduate student from Erik Sontheimer's lab. We met once in Erik's office to discuss how to use CRISPRseek to analyze NmCas9. I don't have much computational background so the questions may be very naive. As you know, our PAM is "GATT" instead of "NGG". I tried to use the examples in the PDF files by modifying a few criteria but unfortunately I couldn't make it work. One big question I have now is I can't modify the internal criteria by changing PAM from NGG to GATT, gRNA.size from 20 to 28. So I couldn't search chromosome 6 to find potential gRNA by CRISPRseek (by doing this I could validate a few gRNAs we knew at chr6 to make sure CRISPRseek works in our case). The example I used according to your PDF is in red below:
Usage:
findgRNAs(inputFilePath, format = "fasta", PAM = "GATT", PAM.size = 4, findPairedgRNAOnly = FALSE, gRNA.pattern = "", gRNA.size = 28, overlap.gRNA.positions = c(21,22), min.gap = 0, max.gap = 24, pairOutputFile, name.prefix = "", featureWeightMatrixFile = system.file("extdata", "DoenchNBT2014.csv", package = "CRISPRseek"), baseBeforegRNA = 4, baseAfterPAM = 3, calculategRNAEfficacy = FALSE, efficacyFile)
Example:
findgRNAs(inputFilePath = system.file("extdata","inputseq.fa", package = "CRISPRseek"),pairOutputFile = "testpairedgRNAs.xls",findPairedgRNAOnly = TRUE)
A DNAStringSet instance of length 2 (the example runs under R, but the result follows the "NGG" criteria)
Similar example is such as hitsFile <- system.file("extdata", "hits.txt", package = "CRISPRseek")
buildFeatureVectorForScoring(hits,gRNA.size=28,canonical.PAM="GATT")
I hope you could provide me some valuable suggestions on how to write the code. If it's easier for you to answer face-to-face, I could bring my laptop and visit your office whenever you are available. Thank you very much!
Sincerely,
Xin Gao (Daniel)
PhD Student
University of Massachusetts Medical School
CRISPRseek gRNA searching offtarget analysis NmCas9
Julie Zhu ★ 4.3k
@julie-zhu-3596
Daniel,
The parameter allowed.mismatch.PAM is a new parameter I added yesterday to the new version of the package and therefore it will not be recognized by the older version of CRISPRseek.
This parameter will not affect the gRNA search. Please remember to change allowed.mismatch.PAM to 5 if the gRNA PAM.pattern is NNNNGHTT
BTW, PAM is for gRNA search and PAM.pattern is for off target search.
max.mismatch is for off target search only.
For simplicity, I suggest sticking to gRNA.size=20 for now. There are vectors that need to be modified to make gRNA search of different sizes work.
Best regards,
Julie
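As an aside, the degenerate IUPAC bases in these PAM strings (N, H) correspond to simple character classes. A minimal, hypothetical sketch in Python of how such a pattern can be expanded and matched (CRISPRseek itself is an R package and handles this internally):

```python
import re

# IUPAC degenerate nucleotide codes used in this thread's PAM patterns
# (only a subset; the full IUPAC table has more codes)
IUPAC = {"N": "[ACGT]", "H": "[ACT]", "A": "A", "C": "C", "G": "G", "T": "T"}


def pam_regex(pattern):
    """Expand a PAM pattern such as 'NNNNGATT' into an end-anchored regex."""
    return re.compile("".join(IUPAC[base] for base in pattern) + "$")


# 'NNNNGHTT' accepts GATT, GCTT and GTTT preceded by any four bases
rx = pam_regex("NNNNGHTT")
```

This is only to illustrate why NNNNGHTT matches a superset of NNNNGATT sites; the allowed.mismatch.PAM parameter in CRISPRseek serves a different purpose (tolerating PAM mismatches during off-target search).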
# RMO 2019 solutions with sequential hints
Regional Mathematics Olympiad, India (RMO) 2019… try the problems. We give sequential hints leading up to complete solution.
## Problems in Regional Math Olympiad 2019 (this is being updated continually… stay tuned)
1. Suppose x is a nonzero real number such that both $$x^5$$ and $$20 x + \frac{19}{x}$$ are rational numbers. Prove that x is a rational number.
2. Let ABC be a triangle with circumcircle $$\Omega$$ and let G be the centroid of the triangle ABC. Extend AG, BG, and CG to meet $$\Omega$$ again at $$A_1, B_1$$ and $$C_1$$ respectively. Suppose $$\angle BAC = \angle A_1B_1C_1 , \angle ABC = \angle A_1 C_1 B_1$$ and $$\angle ACB = \angle B_1 A_1 C_1$$. Prove that ABC and $$A_1B_1C_1$$ are equilateral triangles.
3. Let a, b, c be positive real numbers such that a + b + c = 1. Prove that $$\frac {a} {a^2 + b^3 + c^3} + \frac {b}{ b^2 + c^3 + a^3 } + \frac {c} { c^2 + a^3 + b^3 } \leq \frac{1}{5abc}$$
4. Consider the following $$3 \times 2$$ array formed by the numbers 1, 2, 3, 4, 5, 6:
$$\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix} = \begin{bmatrix} 1 & 6 \\ 2 & 5 \\ 3 & 4 \end{bmatrix}$$
Observe that all row sums are equal, but the sum of the squares is not the same for each row. Extend the above array to a $$3 \times k$$ array $$(a_{ij})_{3 \times k}$$ for a suitable k, adding more columns using the numbers 7, 8, 9, …, 3k such that
$$\sum_{j=1}^k a_{1j} = \sum_{j=1}^k a_{2j} = \sum_{j=1}^k a_{3j}, \sum_{j=1}^k (a_{1j})^2 = \sum_{j=1}^k (a_{2j})^2 = \sum_{j=1}^k (a_{3j})^2$$
5. In an acute angled triangle ABC, let H be the orthocenter, and let D, E, F be the feet of the altitudes from A, B, C to the opposite sides, respectively. Let L, M, N be the midpoints of the segments AH, EF, BC respectively. Let X, Y be feet of altitudes from L, N on to the line DF. Prove that XM is perpendicular to MY.
6. Suppose 91 distinct positive integers greater than 1 are given such that there are at least 456 pairs among them which are relatively prime. Show that one can find four integers a, b, c, d among them such that gcd (a, b) = gcd (b, c) = gcd (c, d) = gcd (d, a) = 1
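For the first problem, one possible line of attack (a hint, not a complete solution): if $$20x + \frac{19}{x} = q \in \mathbb{Q}$$, then x is a root of $$20x^2 - qx + 19 = 0$$, a quadratic with rational coefficients. If x were irrational, it would be a quadratic irrational $$x = u + v\sqrt{d}$$ with u, v, d rational and $$v \neq 0$$, and its conjugate $$\bar{x} = u - v\sqrt{d}$$ would be the other root of the same quadratic. Conjugation is a field automorphism of $$\mathbb{Q}(\sqrt{d})$$ fixing the rationals, so since $$x^5 \in \mathbb{Q}$$ we get $$\bar{x}^5 = \overline{x^5} = x^5$$. But $$t \mapsto t^5$$ is strictly increasing on the reals, forcing $$\bar{x} = x$$, that is, v = 0, a contradiction.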
# oemof.solph package¶
## oemof.solph.blocks module¶
Creating sets, variables, constraints and parts of the objective function for the specified groups.
This file is part of project oemof (github.com/oemof/oemof). It’s copyrighted by the contributors recorded in the version control history of the file, available from its original location oemof/oemof/solph/blocks.py
class oemof.solph.blocks.Bus(*args, **kwargs)[source]
Bases: pyomo.core.base.block.SimpleBlock
Block for all balanced buses.
The following constraints are built:
Bus balance om.Bus.balance[i, o, t]
$\begin{split}\sum_{i \in INPUTS(n)} flow(i, n, t) = \sum_{o \in OUTPUTS(n)} flow(n, o, t), \\ \forall n \in \textrm{BUSES}, \forall t \in \textrm{TIMESTEPS}.\end{split}$
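In plain terms, the balance constraint requires total inflow to equal total outflow at each bus and time step. An illustrative pure-Python check of that property (a sketch only, not the actual Pyomo constraint):

```python
def bus_balanced(inputs, outputs, tol=1e-9):
    """True if, at every time step, summed inflows equal summed outflows.

    inputs/outputs: lists of flow time series (each a list of floats),
    one series per edge attached to the bus.
    """
    in_totals = [sum(step) for step in zip(*inputs)]    # inflow per time step
    out_totals = [sum(step) for step in zip(*outputs)]  # outflow per time step
    return all(abs(i - o) <= tol for i, o in zip(in_totals, out_totals))
```

In the real model this equality is enforced by the solver rather than checked after the fact.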
class oemof.solph.blocks.Flow(*args, **kwargs)[source]
Bases: pyomo.core.base.block.SimpleBlock
Flow block with definitions for standard flows.
The following variables are created:
Difference of a flow in consecutive timesteps if flow is reduced, indexed by NEGATIVE_GRADIENT_FLOWS, TIMESTEPS.
Difference of a flow in consecutive timesteps if flow is increased, indexed by POSITIVE_GRADIENT_FLOWS, TIMESTEPS.
The following sets are created: (-> see basic sets at Model )
SUMMED_MAX_FLOWS
A set of flows with the attribute summed_max being not None.
SUMMED_MIN_FLOWS
A set of flows with the attribute summed_min being not None.
A set of flows with the attribute negative_gradient being not None.
A set of flows with the attribute positive_gradient being not None
INTEGER_FLOWS
A set of flows where the attribute integer is True (forces flow to only take integer values)
The following constraints are built:
Flow max sum om.Flow.summed_max[i, o]
$\begin{split}\sum_t flow(i, o, t) \cdot \tau \leq summed\_max(i, o) \cdot nominal\_value(i, o), \\ \forall (i, o) \in \textrm{SUMMED\_MAX\_FLOWS}.\end{split}$
Flow min sum om.Flow.summed_min[i, o]
$\begin{split}\sum_t flow(i, o, t) \cdot \tau \geq summed\_min(i, o) \cdot nominal\_value(i, o), \\ \forall (i, o) \in \textrm{SUMMED\_MIN\_FLOWS}.\end{split}$
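These two constraints cap (or floor) the total delivered quantity relative to the installed capacity, e.g. maximum or minimum full load hours. A plain-Python sketch of the same check, assuming a constant time increment tau for brevity (illustrative only, not part of oemof):

```python
def summed_flow_ok(flow, tau, nominal_value, summed_max=None, summed_min=None):
    """Check the summed_max / summed_min flow constraints on a series.

    flow: list of flow values per time step; tau: time step width.
    """
    total = sum(f * tau for f in flow)  # e.g. energy if flow is power
    if summed_max is not None and total > summed_max * nominal_value:
        return False
    if summed_min is not None and total < summed_min * nominal_value:
        return False
    return True
```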
om.Flow.negative_gradient_constr[i, o]:
$\begin{split}flow(i, o, t-1) - flow(i, o, t) \geq \ negative\_gradient(i, o, t), \\ \forall (i, o) \in \textrm{NEGATIVE\_GRADIENT\_FLOWS}, \\ \forall t \in \textrm{TIMESTEPS}.\end{split}$
om.Flow.positive_gradient_constr[i, o]:
$\begin{split}flow(i, o, t) - flow(i, o, t-1) \geq \ positive\_gradient(i, o, t), \\ \forall (i, o) \in \textrm{POSITIVE\_GRADIENT\_FLOWS}, \\ \forall t \in \textrm{TIMESTEPS}.\end{split}$
The following parts of the objective function are created:
If variable_costs are set by the user:
$\sum_{(i,o)} \sum_t flow(i, o, t) \cdot variable\_costs(i, o, t)$
The expression can be accessed by om.Flow.variable_costs and their value after optimization by om.Flow.variable_costs() .
class oemof.solph.blocks.InvestmentFlow(*args, **kwargs)[source]
Bases: pyomo.core.base.block.SimpleBlock
Block for all flows with Investment being not None.
See oemof.solph.options.Investment for all parameters of the Investment class.
See oemof.solph.network.Flow for all parameters of the Flow class.
Variables
All InvestmentFlow are indexed by a starting and ending node $$(i, o)$$, which is omitted in the following for the sake of convenience. The following variables are created:
• $$P(t)$$
Actual flow value (created in oemof.solph.models.BaseModel).
• $$P_{invest}$$
Value of the investment variable, i.e. equivalent to the nominal value of the flows after optimization.
• $$b_{invest}$$
Binary variable for the status of the investment, if nonconvex is True.
Constraints
Depending on the attributes of the InvestmentFlow and Flow, different constraints are created. The following constraint is created for all InvestmentFlow:
Upper bound for the flow value
$P(t) \le ( P_{invest} + P_{exist} ) \cdot f_{max}(t)$
Depeding on the attribute nonconvex, the constraints for the bounds of the decision variable $$P_{invest}$$ are different:
• nonconvex = False
$P_{invest, min} \le P_{invest} \le P_{invest, max}$
• nonconvex = True
$\begin{split}& P_{invest, min} \cdot b_{invest} \le P_{invest}\\ & P_{invest} \le P_{invest, max} \cdot b_{invest}\\\end{split}$
For all InvestmentFlow (independent of the attribute nonconvex), the following additional constraints are created, if the appropriate attribute of the Flow (see oemof.solph.network.Flow) is set:
• fixed=True
Actual value constraint for investments with fixed flow values
$P(t) = ( P_{invest} + P_{exist} ) \cdot f_{actual}(t)$
• min != 0
Lower bound for the flow values
$P(t) \geq ( P_{invest} + P_{exist} ) \cdot f_{min}(t)$
• summed_max is not None
Upper bound for the sum of all flow values (e.g. maximum full load hours)
$\sum_t P(t) \cdot \tau(t) \leq ( P_{invest} + P_{exist} ) \cdot f_{sum, max}$
• summed_min is not None
Lower bound for the sum of all flow values (e.g. minimum full load hours)
$\sum_t P(t) \cdot \tau(t) \geq ( P_{invest} + P_{exist} ) \cdot f_{sum, min}$
Objective function
The part of the objective function added by the InvestmentFlow also depends on whether a convex or nonconvex InvestmentFlow is selected. The following parts of the objective function are created:
• nonconvex = False
$P_{invest} \cdot c_{invest,var}$
• nonconvex = True
$\begin{split}P_{invest} \cdot c_{invest,var} + c_{invest,fix} \cdot b_{invest}\\\end{split}$
The total value of all costs of all InvestmentFlow can be retrieved calling om.InvestmentFlow.investment_costs.expr().
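The two objective variants translate directly into a small helper. A minimal sketch of the formulas above (hypothetical function, not part of oemof):

```python
def investment_costs(p_invest, ep_costs, nonconvex=False, offset=0.0,
                     invest_status=0):
    """Objective contribution of one InvestmentFlow.

    Convex case: P_invest * c_invest_var.
    Nonconvex case: adds the fixed offset only if the binary
    investment status b_invest is 1.
    """
    if not nonconvex:
        return p_invest * ep_costs
    return p_invest * ep_costs + offset * invest_status
```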
List of Variables
symbol attribute explanation
$$P(t)$$ flow[n, o, t] Actual flow value
$$P_{invest}$$ invest[i, o] Invested flow capacity
$$b_{invest}$$ invest_status[i, o] Binary status of investment
List of Parameters
symbol attribute explanation
$$P_{exist}$$ flows[i, o].investment.existing Existing flow capacity
$$P_{invest,min}$$ flows[i, o].investment.minimum Minimum investment capacity
$$P_{invest,max}$$ flows[i, o].investment.maximum Maximum investment capacity
$$c_{invest,var}$$ flows[i, o].investment.ep_costs Variable investment costs
$$c_{invest,fix}$$ flows[i, o].investment.offset Fix investment costs
$$f_{actual}$$ flows[i, o].actual_value[t] Normed fixed value for the flow variable
$$f_{max}$$ flows[i, o].max[t] Normed maximum value of the flow
$$f_{min}$$ flows[i, o].min[t] Normed minimum value of the flow
$$f_{sum,max}$$ flows[i, o].summed_max Specific maximum of summed flow values (per installed capacity)
$$f_{sum,min}$$ flows[i, o].summed_min Specific minimum of summed flow values (per installed capacity)
$$\tau(t)$$ timeincrement[t] Time step width for each time step
Note
In case of a nonconvex investment flow (nonconvex=True), the existing flow capacity $$P_{exist}$$ needs to be zero. At least, it has not yet been tested whether a nonzero existing capacity works, or makes any sense at all.
class oemof.solph.blocks.NonConvexFlow(*args, **kwargs)[source]
Bases: pyomo.core.base.block.SimpleBlock
The following sets are created: (-> see basic sets at Model )
NONCONVEX_FLOWS
A set of flows with the attribute nonconvex of type options.NonConvex.
MIN_FLOWS
A subset of set NONCONVEX_FLOWS with the attribute min being not None in the first timestep.
ACTIVITYCOSTFLOWS
A subset of set NONCONVEX_FLOWS with the attribute activity_costs being not None.
STARTUPFLOWS
A subset of set NONCONVEX_FLOWS with the attribute maximum_startups or startup_costs being not None.
MAXSTARTUPFLOWS
A subset of set STARTUPFLOWS with the attribute maximum_startups being not None.
SHUTDOWNFLOWS
A subset of set NONCONVEX_FLOWS with the attribute maximum_shutdowns or shutdown_costs being not None.
MAXSHUTDOWNFLOWS
A subset of set SHUTDOWNFLOWS with the attribute maximum_shutdowns being not None.
MINUPTIMEFLOWS
A subset of set NONCONVEX_FLOWS with the attribute minimum_uptime being not None.
MINDOWNTIMEFLOWS
A subset of set NONCONVEX_FLOWS with the attribute minimum_downtime being not None.
The following variables are created:
Status variable (binary) om.NonConvexFlow.status:
Variable indicating if flow is >= 0 indexed by FLOWS
Startup variable (binary) om.NonConvexFlow.startup:
Variable indicating startup of flow (component) indexed by STARTUPFLOWS
Shutdown variable (binary) om.NonConvexFlow.shutdown:
Variable indicating shutdown of flow (component) indexed by SHUTDOWNFLOWS
The following constraints are created:
Minimum flow constraint om.NonConvexFlow.min[i,o,t]
$\begin{split}flow(i, o, t) \geq min(i, o, t) \cdot nominal\_value \ \cdot status(i, o, t), \\ \forall t \in \textrm{TIMESTEPS}, \\ \forall (i, o) \in \textrm{NONCONVEX\_FLOWS}.\end{split}$
Maximum flow constraint om.NonConvexFlow.max[i,o,t]
$\begin{split}flow(i, o, t) \leq max(i, o, t) \cdot nominal\_value \ \cdot status(i, o, t), \\ \forall t \in \textrm{TIMESTEPS}, \\ \forall (i, o) \in \textrm{NONCONVEX\_FLOWS}.\end{split}$
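Both constraints couple the flow value to the binary status variable: when the status is 0 the flow is forced to 0, otherwise it must stay between the normed min and max times the nominal value. An illustrative check in plain Python (a sketch, not the Pyomo code):

```python
def flow_obeys_status(flow, status, f_min, f_max, nominal_value):
    """min(t)*nominal*status(t) <= flow(t) <= max(t)*nominal*status(t).

    flow/status/f_min/f_max: per-time-step lists; status is binary.
    """
    return all(
        mn * nominal_value * st <= f <= mx * nominal_value * st
        for f, st, mn, mx in zip(flow, status, f_min, f_max)
    )
```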
Startup constraint om.NonConvexFlow.startup_constr[i,o,t]
$\begin{split}startup(i, o, t) \geq \ status(i,o,t) - status(i, o, t-1) \\ \forall t \in \textrm{TIMESTEPS}, \\ \forall (i,o) \in \textrm{STARTUPFLOWS}.\end{split}$
Maximum startups constraint
om.NonConvexFlow.max_startup_constr[i,o,t]
$\sum_{t \in \textrm{TIMESTEPS}} startup(i, o, t) \leq \ N_{start}(i,o) \forall (i,o) \in \textrm{MAXSTARTUPFLOWS}.$
Shutdown constraint om.NonConvexFlow.shutdown_constr[i,o,t]
$\begin{split}shutdown(i, o, t) \geq \ status(i, o, t-1) - status(i, o, t) \\ \forall t \in \textrm{TIMESTEPS}, \\ \forall (i, o) \in \textrm{SHUTDOWNFLOWS}.\end{split}$
Maximum shutdowns constraint
om.NonConvexFlow.max_shutdown_constr[i,o,t]
$\sum_{t \in \textrm{TIMESTEPS}} shutdown(i, o, t) \leq \ N_{shutdown}(i,o) \forall (i,o) \in \textrm{MAXSHUTDOWNFLOWS}.$
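The startup and shutdown variables simply flag the 0 to 1 and 1 to 0 transitions of the status series; the maximum-startup and maximum-shutdown constraints then bound how often such transitions occur. A plain-Python equivalent of the counting (illustrative only, not part of oemof):

```python
def count_startups(status):
    """Number of 0 -> 1 transitions in a binary status series."""
    return sum(1 for prev, cur in zip(status, status[1:]) if cur > prev)


def count_shutdowns(status):
    """Number of 1 -> 0 transitions in a binary status series."""
    return sum(1 for prev, cur in zip(status, status[1:]) if cur < prev)
```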
Minimum uptime constraint om.NonConvexFlow.uptime_constr[i,o,t]
$\begin{split}(status(i, o, t)-status(i, o, t-1)) \cdot minimum\_uptime(i, o) \\ \leq \sum_{n=0}^{minimum\_uptime-1} status(i,o,t+n) \\ \forall t \in \textrm{TIMESTEPS} | \\ t \neq \{0..minimum\_uptime\} \cup \ \{t\_max-minimum\_uptime..t\_max\} , \\ \forall (i,o) \in \textrm{MINUPTIMEFLOWS}. \\ \\ status(i, o, t) = initial\_status(i, o) \\ \forall t \in \textrm{TIMESTEPS} | \\ t = \{0..minimum\_uptime\} \cup \ \{t\_max-minimum\_uptime..t\_max\} , \\ \forall (i,o) \in \textrm{MINUPTIMEFLOWS}.\end{split}$
Minimum downtime constraint om.NonConvexFlow.downtime_constr[i,o,t]
$\begin{split}(status(i, o, t-1)-status(i, o, t)) \ \cdot minimum\_downtime(i, o) \\ \leq minimum\_downtime(i, o) \ - \sum_{n=0}^{minimum\_downtime-1} status(i,o,t+n) \\ \forall t \in \textrm{TIMESTEPS} | \\ t \neq \{0..minimum\_downtime\} \cup \ \{t\_max-minimum\_downtime..t\_max\} , \\ \forall (i,o) \in \textrm{MINDOWNTIMEFLOWS}. \\ \\ status(i, o, t) = initial\_status(i, o) \\ \forall t \in \textrm{TIMESTEPS} | \\ t = \{0..minimum\_downtime\} \cup \ \{t\_max-minimum\_downtime..t\_max\} , \\ \forall (i,o) \in \textrm{MINDOWNTIMEFLOWS}.\end{split}$
The following parts of the objective function are created:
If nonconvex.startup_costs is set by the user:
$\sum_{i, o \in STARTUPFLOWS} \sum_t startup(i, o, t) \ \cdot startup\_costs(i, o)$
If nonconvex.shutdown_costs is set by the user:
$\sum_{i, o \in SHUTDOWNFLOWS} \sum_t shutdown(i, o, t) \ \cdot shutdown\_costs(i, o)$
If nonconvex.activity_costs is set by the user:
$\sum_{i, o \in ACTIVITYCOSTFLOWS} \sum_t status(i, o, t) \ \cdot activity\_costs(i, o)$
class oemof.solph.blocks.Transformer(*args, **kwargs)[source]
Bases: pyomo.core.base.block.SimpleBlock
Block for the linear relation of nodes with type Transformer
The following sets are created: (-> see basic sets at Model )
TRANSFORMERS
A set with all Transformer objects.
The following constraints are created:
Linear relation om.Transformer.relation[i,o,t]
$\begin{split}flow(i, n, t) / conversion\_factor(n, i, t) = \ flow(n, o, t) / conversion\_factor(n, o, t), \\ \forall t \in \textrm{TIMESTEPS}, \\ \forall n \in \textrm{TRANSFORMERS}, \\ \forall i \in \textrm{INPUTS(n)}, \\ \forall o \in \textrm{OUTPUTS(n)}.\end{split}$
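Solving the linear relation for one output flow gives flow_out = flow_in * cf_out / cf_in. A one-line sketch (hypothetical helper, not the Pyomo constraint):

```python
def transformer_output(inflow, cf_in, cf_out):
    """Linear Transformer relation solved for one output:

    flow_in / cf_in == flow_out / cf_out  =>  flow_out = inflow * cf_out / cf_in
    """
    return inflow * cf_out / cf_in
```

For example, a gas turbine with cf_in = 1.0 on the fuel side and cf_out = 0.4 on the electricity side turns 10 units of fuel into 4 units of electricity.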
## oemof.solph.components module¶
This module is designed to hold components with their classes and associated individual constraints (blocks) and groupings. Therefore this module holds the class definition and the block directly located by each other.
This file is part of project oemof (github.com/oemof/oemof). It’s copyrighted by the contributors recorded in the version control history of the file, available from its original location oemof/oemof/solph/components.py
class oemof.solph.components.ExtractionTurbineCHP(conversion_factor_full_condensation, *args, **kwargs)[source]
A CHP with an extraction turbine in a linear model. For more options see the GenericCHP class.
One main output flow has to be defined and is tapped by the remaining flow. The conversion factors have to be defined for the maximum tapped flow (full CHP mode) and for no tapped flow (full condensing mode). It is, however, possible to limit the variability of the tapped flow, so that the full condensing mode will never be reached.
Parameters: conversion_factors (dict) – Dictionary containing conversion factors for conversion of inflow to specified outflow. Keys are output bus objects. The dictionary values can either be a scalar or a sequence with length of time horizon for simulation. conversion_factor_full_condensation (dict) – The efficiency of the main flow if there is no tapped flow. Only one key is allowed. Use one of the keys of the conversion factors. The key indicates the main flow. The other output flow is the tapped flow.
Note
The following sets, variables, constraints and objective parts are created
Examples
>>> from oemof import solph
>>> bel = solph.Bus(label='electricityBus')
>>> bth = solph.Bus(label='heatBus')
>>> bgas = solph.Bus(label='commodityBus')
>>> et_chp = solph.components.ExtractionTurbineCHP(
... label='variable_chp_gas',
... inputs={bgas: solph.Flow(nominal_value=10e10)},
... outputs={bel: solph.Flow(), bth: solph.Flow()},
... conversion_factors={bel: 0.3, bth: 0.5},
... conversion_factor_full_condensation={bel: 0.5})
constraint_group()[source]
class oemof.solph.components.ExtractionTurbineCHPBlock(*args, **kwargs)[source]
Bases: pyomo.core.base.block.SimpleBlock
Block for the linear relation of nodes with type ExtractionTurbineCHP
The following two constraints are created:
$\begin{split}& (1)\dot H_{Fuel}(t) = \frac{P_{el}(t) + \dot Q_{th}(t) \cdot \beta(t)} {\eta_{el,woExtr}(t)} \\ & (2)P_{el}(t) \geq \dot Q_{th}(t) \cdot C_b = \dot Q_{th}(t) \cdot \frac{\eta_{el,maxExtr}(t)} {\eta_{th,maxExtr}(t)}\end{split}$
where $$\beta$$ is defined as:
$\beta(t) = \frac{\eta_{el,woExtr}(t) - \eta_{el,maxExtr}(t)}{\eta_{th,maxExtr}(t)}$
where the first equation is the result of the relation between the input flow and the two output flows, the second equation stems from how the two output flows relate to each other, and the symbols used are defined as follows (with Variables (V) and Parameters (P)):
symbol attribute type explanation
$$\dot H_{Fuel}$$ flow[i, n, t] V fuel input flow
$$P_{el}$$ flow[n, main_output, t] V electric power
$$\dot Q_{th}$$ flow[n, tapped_output, t] V thermal output
$$\beta$$ main_flow_loss_index[n, t] P power loss index
$$\eta_{el,woExtr}$$ conversion_factor_full_condensation[n, t] P electric efficiency without heat extraction
$$\eta_{el,maxExtr}$$ conversion_factors[main_output][n, t] P electric efficiency with max heat extraction
$$\eta_{th,maxExtr}$$ conversion_factors[tapped_output][n, t] P thermal efficiency with maximal heat extraction
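As a plain-Python sanity check of the two constraints above (all numbers are illustrative assumptions, not values from the API):

```python
# Illustrative efficiencies (assumed values, not from any real plant)
eta_el_wo_extr = 0.50   # electric efficiency, full condensing mode
eta_el_max_extr = 0.30  # electric efficiency at maximum extraction
eta_th_max_extr = 0.50  # thermal efficiency at maximum extraction

# Power loss index beta, as defined above
beta = (eta_el_wo_extr - eta_el_max_extr) / eta_th_max_extr  # = 0.4

# Pick an operating point and evaluate constraint (1): fuel demand
p_el = 30.0   # electric output
q_th = 20.0   # tapped heat output
h_fuel = (p_el + q_th * beta) / eta_el_wo_extr  # = 76.0

# Constraint (2): electric power must stay above the back-pressure line
c_b = eta_el_max_extr / eta_th_max_extr  # = 0.6
assert p_el >= q_th * c_b
```

This makes the trade-off visible: each unit of tapped heat raises the fuel demand by beta/eta_el_wo-extr units while the back-pressure line caps how much heat can be extracted at a given electric output.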
CONSTRAINT_GROUP = True
class oemof.solph.components.GenericCHP(*args, **kwargs)[source]
Bases: oemof.network.network.Transformer
Component GenericCHP to model combined heat and power plants.
Can be used to model (combined cycle) extraction or back-pressure turbines and uses a mixed-integer linear formulation. Thus, it induces more computational effort than the ExtractionTurbineCHP for the benefit of higher accuracy.
The full set of equations is described in: Mollenhauer, E., Christidis, A. & Tsatsaronis, G. Evaluation of an energy- and exergy-based generic modeling approach of combined heat and power plants Int J Energy Environ Eng (2016) 7: 167. https://doi.org/10.1007/s40095-016-0204-6
For a general understanding of (MI)LP CHP representation, see: Fabricio I. Salgado, P., Short-Term Operation Planning on Cogeneration Systems: A Survey, Electric Power Systems Research, Volume 78, Issue 5, May 2008, Pages 835-848. https://doi.org/10.1016/j.epsr.2007.06.001
Note
An adaption for the flow parameter H_L_FG_share_max has been made to set the flue gas losses at maximum heat extraction H_L_FG_max as share of the fuel flow H_F e.g. for combined cycle extraction turbines. The flow parameter H_L_FG_share_min can be used to set the flue gas losses at minimum heat extraction H_L_FG_min as share of the fuel flow H_F e.g. for motoric CHPs. The boolean component parameter back_pressure can be set to model back-pressure characteristics.
Also have a look at the examples on how to use it.
Parameters: fuel_input (dict) – Dictionary with key-value-pair of oemof.Bus and oemof.Flow object for the fuel input. electrical_output (dict) – Dictionary with key-value-pair of oemof.Bus and oemof.Flow object for the electrical output. Related parameters like P_max_woDH are passed as attributes of the oemof.Flow object. heat_output (dict) – Dictionary with key-value-pair of oemof.Bus and oemof.Flow object for the heat output. Related parameters like Q_CW_min are passed as attributes of the oemof.Flow object. Beta (list of numerical values) – Beta values in same dimension as all other parameters (length of optimization period). back_pressure (boolean) – Flag to use back-pressure characteristics. Set to True and Q_CW_min to zero for back-pressure turbines. See paper above for more information.
Note
The following sets, variables, constraints and objective parts are created
Examples
>>> from oemof import solph
>>> bel = solph.Bus(label='electricityBus')
>>> bth = solph.Bus(label='heatBus')
>>> bgas = solph.Bus(label='commodityBus')
>>> ccet = solph.components.GenericCHP(
... label='combined_cycle_extraction_turbine',
... fuel_input={bgas: solph.Flow(
... H_L_FG_share_max=[0.183])},
... electrical_output={bel: solph.Flow(
... P_max_woDH=[155.946],
... P_min_woDH=[68.787],
... Eta_el_max_woDH=[0.525],
... Eta_el_min_woDH=[0.444])},
... heat_output={bth: solph.Flow(
... Q_CW_min=[10.552])},
... Beta=[0.122], back_pressure=False)
>>> type(ccet)
<class 'oemof.solph.components.GenericCHP'>
alphas
Compute or return the _alphas attribute.
constraint_group()[source]
class oemof.solph.components.GenericCHPBlock(*args, **kwargs)[source]
Bases: pyomo.core.base.block.SimpleBlock
Block for the relation of the $$n$$ nodes with type GenericCHP.
The following constraints are created:
$\begin{split}& (1)\qquad \dot{H}_F(t) = fuel\ input \\ & (2)\qquad \dot{Q}(t) = heat\ output \\ & (3)\qquad P_{el}(t) = power\ output\\ & (4)\qquad \dot{H}_F(t) = \alpha_0(t) \cdot Y(t) + \alpha_1(t) \cdot P_{el,woDH}(t)\\ & (5)\qquad \dot{H}_F(t) = \alpha_0(t) \cdot Y(t) + \alpha_1(t) \cdot ( P_{el}(t) + \beta \cdot \dot{Q}(t) )\\ & (6)\qquad \dot{H}_F(t) \leq Y(t) \cdot \frac{P_{el, max, woDH}(t)}{\eta_{el,max,woDH}(t)}\\ & (7)\qquad \dot{H}_F(t) \geq Y(t) \cdot \frac{P_{el, min, woDH}(t)}{\eta_{el,min,woDH}(t)}\\ & (8)\qquad \dot{H}_{L,FG,max}(t) = \dot{H}_F(t) \cdot \dot{H}_{L,FG,sharemax}(t)\\ & (9)\qquad \dot{H}_{L,FG,min}(t) = \dot{H}_F(t) \cdot \dot{H}_{L,FG,sharemin}(t)\\ & (10)\qquad P_{el}(t) + \dot{Q}(t) + \dot{H}_{L,FG,max}(t) + \dot{Q}_{CW, min}(t) \cdot Y(t) = / \leq \dot{H}_F(t)\\\end{split}$
where $$= / \leq$$ depends on the CHP being back pressure or not.
The coefficients $$\alpha_0$$ and $$\alpha_1$$ can be determined given the efficiencies maximal/minimal load:
$\begin{split}& \eta_{el,max,woDH}(t) = \frac{P_{el,max,woDH}(t)}{\alpha_0(t) \cdot Y(t) + \alpha_1(t) \cdot P_{el,max,woDH}(t)}\\ & \eta_{el,min,woDH}(t) = \frac{P_{el,min,woDH}(t)}{\alpha_0(t) \cdot Y(t) + \alpha_1(t) \cdot P_{el,min,woDH}(t)}\\\end{split}$
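Since $$Y(t)$$ is binary, fixing $$Y = 1$$ turns the two efficiency equations into a linear 2-by-2 system in $$\alpha_0$$ and $$\alpha_1$$. A short Python sketch of that calculation (function and argument names are local stand-ins, not part of the oemof API):

```python
def chp_alphas(p_max, p_min, eta_max, eta_min, y=1.0):
    """Solve the two efficiency equations above for alpha_0 and alpha_1
    (status variable Y assumed to be on)."""
    f_max = p_max / eta_max  # fuel flow at maximal load
    f_min = p_min / eta_min  # fuel flow at minimal load
    alpha_1 = (f_max - f_min) / (p_max - p_min)
    alpha_0 = (f_max - alpha_1 * p_max) / y
    return alpha_0, alpha_1

# Values borrowed from the GenericCHP example above
a0, a1 = chp_alphas(p_max=155.946, p_min=68.787,
                    eta_max=0.525, eta_min=0.444)

# Both efficiency equations are reproduced by construction
assert abs(155.946 / (a0 + a1 * 155.946) - 0.525) < 1e-9
assert abs(68.787 / (a0 + a1 * 68.787) - 0.444) < 1e-9
```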
If the attribute $$\dot{H}_{L,FG,min}$$ is not None, e.g. for a motoric CHP, the following constraint is created additionally:
Constraint:
$\begin{split}& (11)\qquad P_{el}(t) + \dot{Q}(t) + \dot{H}_{L,FG,min}(t) + \dot{Q}_{CW, min}(t) \cdot Y(t) \geq \dot{H}_F(t)\\[10pt]\end{split}$
The symbols used are defined as follows (with Variables (V) and Parameters (P)):
math. symbol attribute type explanation
$$\dot{H}_{F}$$ H_F[n,t] V input of enthalpy through fuel input
$$P_{el}$$ P[n,t] V provided electric power
$$P_{el,woDH}$$ P_woDH[n,t] V electric power without district heating
$$P_{el,min,woDH}$$ P_min_woDH[n,t] P min. electric power without district heating
$$P_{el,max,woDH}$$ P_max_woDH[n,t] P max. electric power without district heating
$$\dot{Q}$$ Q[n,t] V provided heat
$$\dot{Q}_{CW, min}$$ Q_CW_min[n,t] P minimal therm. condenser load to cooling water
$$\dot{H}_{L,FG,min}$$ H_L_FG_min[n,t] V flue gas enthalpy loss at min heat extraction
$$\dot{H}_{L,FG,max}$$ H_L_FG_max[n,t] V flue gas enthalpy loss at max heat extraction
$$\dot{H}_{L,FG,sharemin}$$ H_L_FG_share_min[n,t] P share of flue gas loss at min heat extraction
$$\dot{H}_{L,FG,sharemax}$$ H_L_FG_share_max[n,t] P share of flue gas loss at max heat extraction
$$Y$$ Y[n,t] V status variable on/off
$$\alpha_0$$ n.alphas[0][n,t] P coefficient describing efficiency
$$\alpha_1$$ n.alphas[1][n,t] P coefficient describing efficiency
$$\beta$$ Beta[n,t] P power loss index
$$\eta_{el,min,woDH}$$ Eta_el_min_woDH[n,t] P el. eff. at min. fuel flow w/o distr. heating
$$\eta_{el,max,woDH}$$ Eta_el_max_woDH[n,t] P el. eff. at max. fuel flow w/o distr. heating
CONSTRAINT_GROUP = True
class oemof.solph.components.GenericInvestmentStorageBlock(*args, **kwargs)[source]
Bases: pyomo.core.base.block.SimpleBlock
Block for all storages with Investment being not None. See oemof.solph.options.Investment for all parameters of the Investment class.
Variables
All Storages are indexed by $$n$$, which is omitted in the following for the sake of convenience. The following variables are created as attributes of om.InvestmentStorage:
• $$P_i(t)$$
Inflow of the storage (created in oemof.solph.models.BaseModel).
• $$P_o(t)$$
Outflow of the storage (created in oemof.solph.models.BaseModel).
• $$E(t)$$
Energy currently stored / absolute level of stored energy.
• $$E_{invest}$$
Invested (nominal) capacity of the storage.
• $$E(-1)$$
Initial storage capacity (before timestep 0).
• $$b_{invest}$$
Binary variable for the status of the investment, if nonconvex is True.
Constraints
The following constraints are created for all investment storages:
Storage balance (Same as for GenericStorageBlock)
$\begin{split}E(t) = &E(t-1) \cdot (1 - \beta(t)) ^{\tau(t)/(t_u)} \\ &- \gamma(t)\cdot (E_{exist} + E_{invest}) \cdot {\tau(t)/(t_u)}\\ &- \delta(t) \cdot {\tau(t)/(t_u)}\\ &- \frac{P_o(t)}{\eta_o(t)} \cdot \tau(t) + P_i(t) \cdot \eta_i(t) \cdot \tau(t)\end{split}$
Depending on the attribute nonconvex, the constraints for the bounds of the decision variable $$E_{invest}$$ are different:
• nonconvex = False
$E_{invest, min} \le E_{invest} \le E_{invest, max}$
• nonconvex = True
$\begin{split}& E_{invest, min} \cdot b_{invest} \le E_{invest}\\ & E_{invest} \le E_{invest, max} \cdot b_{invest}\\\end{split}$
The following constraints are created depending on the attributes of the components.GenericStorage:
• initial_storage_level is None
Constraint for a variable initial storage level:
$E(-1) \le E_{invest} + E_{exist}$
• initial_storage_level is not None
An initial value for the storage content is given:
$E(-1) = (E_{invest} + E_{exist}) \cdot c(-1)$
• balanced=True
The energy content of storage of the first and the last timestep are set equal:
$E(-1) = E(t_{last})$
• invest_relation_input_capacity is not None
Connect the invest variables of the storage and the input flow:
$P_{i,invest} + P_{i,exist} = (E_{invest} + E_{exist}) \cdot r_{cap,in}$
• invest_relation_output_capacity is not None
Connect the invest variables of the storage and the output flow:
$P_{o,invest} + P_{o,exist} = (E_{invest} + E_{exist}) \cdot r_{cap,out}$
• invest_relation_input_output is not None
Connect the invest variables of the input and the output flow:
$P_{i,invest} + P_{i,exist} = (P_{o,invest} + P_{o,exist}) \cdot r_{in,out}$
• max_storage_level
Rule for upper bound constraint for the storage content:
$E(t) \leq E_{invest} \cdot c_{max}(t)$
• min_storage_level
Rule for lower bound constraint for the storage content:
$E(t) \geq E_{invest} \cdot c_{min}(t)$
Objective function
The part of the objective function added by the investment storages also depends on whether a convex or nonconvex investment option is selected. The following parts of the objective function are created:
• nonconvex = False
$E_{invest} \cdot c_{invest,var}$
• nonconvex = True
$\begin{split}E_{invest} \cdot c_{invest,var} + c_{invest,fix} \cdot b_{invest}\\\end{split}$
The total value of all investment costs of all InvestmentStorages can be retrieved calling om.GenericInvestmentStorageBlock.investment_costs.expr().
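With toy numbers (assumptions, not defaults), the two objective variants evaluate as:

```python
# Toy numbers (assumed) comparing the two investment objective variants
e_invest = 500.0   # invested storage capacity
c_var = 10.0       # ep_costs (equivalent periodical costs per unit)
c_fix = 2000.0     # offset (fix investment costs)
b_invest = 1       # binary investment status (nonconvex case)

convex_cost = e_invest * c_var                       # nonconvex = False
nonconvex_cost = e_invest * c_var + c_fix * b_invest # nonconvex = True

assert convex_cost == 5000.0
assert nonconvex_cost == 7000.0
```

The fix cost term only enters the objective when the binary status variable is one, which is what makes the nonconvex variant a mixed-integer problem.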
List of Variables
symbol attribute explanation
$$P_i(t)$$ flow[i[n], n, t] Inflow of the storage
$$P_o(t)$$ flow[n, o[n], t] Outflow of the storage
$$E(t)$$ capacity[n, t] Actual storage content (absolute storage level)
$$E_{invest}$$ invest[n, t] Invested (nominal) capacity of the storage
$$E(-1)$$ init_cap[n] Initial storage capacity (before timestep 0)
$$b_{invest}$$ invest_status[i, o] Binary variable for the status of investment
$$P_{i,invest}$$ InvestmentFlow.invest[i[n], n] Invested (nominal) inflow (Investmentflow)
$$P_{o,invest}$$ InvestmentFlow.invest[n, o[n]] Invested (nominal) outflow (Investmentflow)
List of Parameters
symbol attribute explanation
$$E_{exist}$$ flows[i, o].investment.existing Existing storage capacity
$$E_{invest,min}$$ flows[i, o].investment.minimum Minimum investment value
$$E_{invest,max}$$ flows[i, o].investment.maximum Maximum investment value
$$P_{i,exist}$$ flows[i[n], n].investment.existing Existing inflow capacity
$$P_{o,exist}$$ flows[n, o[n]].investment.existing Existing outflow capacity
$$c_{invest,var}$$ flows[i, o].investment.ep_costs Variable investment costs
$$c_{invest,fix}$$ flows[i, o].investment.offset Fix investment costs
$$r_{cap,in}$$ invest_relation_input_capacity Relation of storage capacity and nominal inflow
$$r_{cap,out}$$ invest_relation_output_capacity Relation of storage capacity and nominal outflow
$$r_{in,out}$$ invest_relation_input_output Relation of nominal in- and outflow
$$\beta(t)$$ loss_rate[t] Fraction of lost energy as share of $$E(t)$$ per time unit
$$\gamma(t)$$ fixed_losses_relative[t] Fixed loss of energy relative to $$E_{invest} + E_{exist}$$ per time unit
$$\delta(t)$$ fixed_losses_absolute[t] Absolute fixed loss of energy per time unit
$$\eta_i(t)$$ inflow_conversion_factor[t] Conversion factor (i.e. efficiency) when storing energy
$$\eta_o(t)$$ outflow_conversion_factor[t] Conversion factor (i.e. efficiency) when taking stored energy
$$c(-1)$$ initial_storage_level Initial relative storage content (before timestep 0)
$$c_{max}$$ flows[i, o].max[t] Normed maximum value of storage content
$$c_{min}$$ flows[i, o].min[t] Normed minimum value of storage content
$$\tau(t)$$ Duration of time step
$$t_u$$ Time unit of losses $$\beta(t)$$, $$\gamma(t)$$, $$\delta(t)$$ and timeincrement $$\tau(t)$$
CONSTRAINT_GROUP = True
class oemof.solph.components.GenericStorage(*args, max_storage_level=1, min_storage_level=0, **kwargs)[source]
Bases: oemof.network.network.Transformer
Component GenericStorage to model storages with basic characteristics.
Parameters: nominal_storage_capacity (numeric, $$E_{nom}$$) – Absolute nominal capacity of the storage. invest_relation_input_capacity (numeric or None, $$r_{cap,in}$$) – Ratio between the investment variable of the input Flow and the investment variable of the storage: $$\dot{E}_{in,invest} = E_{invest} \cdot r_{cap,in}$$ invest_relation_output_capacity (numeric or None, $$r_{cap,out}$$) – Ratio between the investment variable of the output Flow and the investment variable of the storage: $$\dot{E}_{out,invest} = E_{invest} \cdot r_{cap,out}$$ invest_relation_input_output (numeric or None, $$r_{in,out}$$) – Ratio between the investment variable of the output Flow and the investment variable of the input flow. This ratio is used to fix the flow investments to each other. Values < 1 set the input flow lower than the output flow, and values > 1 set the input flow higher than the output flow. If None, no relation will be set: $$\dot{E}_{in,invest} = \dot{E}_{out,invest} \cdot r_{in,out}$$ initial_storage_level (numeric, $$c(-1)$$) – The content of the storage in the first time step of the optimization. balanced (boolean) – Couple the storage levels of the first and last time step. (Total inflow and total outflow are balanced.) loss_rate (numeric (iterable or scalar)) – The relative loss of the storage capacity per time unit. fixed_losses_relative (numeric (iterable or scalar), $$\gamma(t)$$) – Losses independent of state of charge between two consecutive timesteps relative to nominal storage capacity. fixed_losses_absolute (numeric (iterable or scalar), $$\delta(t)$$) – Losses independent of state of charge and independent of nominal storage capacity between two consecutive timesteps. inflow_conversion_factor (numeric (iterable or scalar), $$\eta_i(t)$$) – The relative conversion factor, i.e. efficiency associated with the inflow of the storage.
outflow_conversion_factor (numeric (iterable or scalar), $$\eta_o(t)$$) – see: inflow_conversion_factor. min_storage_level (numeric (iterable or scalar), $$c_{min}(t)$$) – The minimum stored energy of the storage as a fraction of the nominal storage capacity (between 0 and 1). To set different values in every time step use a sequence. max_storage_level (numeric (iterable or scalar), $$c_{max}(t)$$) – see: min_storage_level. investment (oemof.solph.options.Investment object) – Object indicating if a nominal_value of the flow is determined by the optimization problem. Note: This will refer all attributes to an investment variable instead of to the nominal_storage_capacity. The nominal_storage_capacity should not be set (or set to None) if an investment object is used.
Note
The following sets, variables, constraints and objective parts are created
Examples
Basic usage examples of the GenericStorage with a random selection of attributes. See the Flow class for all Flow attributes.
>>> from oemof import solph
>>> my_bus = solph.Bus('my_bus')
>>> my_storage = solph.components.GenericStorage(
... label='storage',
... nominal_storage_capacity=1000,
... inputs={my_bus: solph.Flow(nominal_value=200, variable_costs=10)},
... outputs={my_bus: solph.Flow(nominal_value=200)},
... loss_rate=0.01,
... initial_storage_level=0,
... max_storage_level = 0.9,
... inflow_conversion_factor=0.9,
... outflow_conversion_factor=0.93)
>>> my_investment_storage = solph.components.GenericStorage(
... label='storage',
... investment=solph.Investment(ep_costs=50),
... inputs={my_bus: solph.Flow()},
... outputs={my_bus: solph.Flow()},
... loss_rate=0.02,
... initial_storage_level=None,
... invest_relation_input_capacity=1/6,
... invest_relation_output_capacity=1/6,
... inflow_conversion_factor=1,
... outflow_conversion_factor=0.8)
constraint_group()[source]
class oemof.solph.components.GenericStorageBlock(*args, **kwargs)[source]
Bases: pyomo.core.base.block.SimpleBlock
Storage without an Investment object.
The following sets are created: (-> see basic sets at Model )
STORAGES
A set with all Storage objects which do not have an investment attribute of type Investment.
STORAGES_BALANCED
A set of all Storage objects, with ‘balanced’ attribute set to True.
STORAGES_WITH_INVEST_FLOW_REL
A set with all Storage objects with two investment flows coupled with the ‘invest_relation_input_output’ attribute.
The following variables are created:
capacity
Capacity (level) for every storage and timestep. The value of the capacity at the beginning is set by the parameter initial_capacity, or left free if initial_capacity is None. The variable of storage s and timestep t can be accessed by: om.Storage.capacity[s, t]
The following constraints are created:
Set last time step to the initial capacity if balanced == True
$E(t_{last}) = E(-1)$
Storage balance om.Storage.balance[n, t]
$\begin{split}E(t) = &E(t-1) \cdot (1 - \beta(t)) ^{\tau(t)/(t_u)} \\ &- \gamma(t)\cdot E_{nom} \cdot {\tau(t)/(t_u)}\\ &- \delta(t) \cdot {\tau(t)/(t_u)}\\ &- \frac{\dot{E}_o(t)}{\eta_o(t)} \cdot \tau(t) + \dot{E}_i(t) \cdot \eta_i(t) \cdot \tau(t)\end{split}$
Connect the invest variables of the input and the output flow.
$\begin{split}InvestmentFlow.invest(source(n), n) + existing = \\ (InvestmentFlow.invest(n, target(n)) + existing) * \\ invest\_relation\_input\_output(n) \\ \forall n \in \textrm{INVEST\_REL\_IN\_OUT}\end{split}$
symbol explanation attribute
$$E(t)$$ energy currently stored capacity
$$E_{nom}$$ nominal capacity of the energy storage nominal_storage_capacity
$$c(-1)$$ state before initial time step initial_storage_level
$$c_{min}(t)$$ minimum allowed storage min_storage_level[t]
$$c_{max}(t)$$ maximum allowed storage max_storage_level[t]
$$\beta(t)$$ fraction of lost energy as share of $$E(t)$$ per time unit loss_rate[t]
$$\gamma(t)$$ fixed loss of energy relative to $$E_{nom}$$ per time unit fixed_losses_relative[t]
$$\delta(t)$$ absolute fixed loss of energy per time unit fixed_losses_absolute[t]
$$\dot{E}_i(t)$$ energy flowing in inputs
$$\dot{E}_o(t)$$ energy flowing out outputs
$$\eta_i(t)$$ conversion factor (i.e. efficiency) when storing energy inflow_conversion_factor[t]
$$\eta_o(t)$$ conversion factor (i.e. efficiency) when taking stored energy outflow_conversion_factor[t]
$$\tau(t)$$ duration of time step
$$t_u$$ time unit of losses $$\beta(t)$$, $$\gamma(t)$$, $$\delta(t)$$ and timeincrement $$\tau(t)$$
The following parts of the objective function are created:
Nothing added to the objective function.
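The storage balance can be evaluated step by step in plain Python. The sketch below mirrors the balance equation above with locally named parameters (a simplified stand-in, not the actual block implementation):

```python
def storage_balance_step(e_prev, p_in, p_out, e_nom,
                         loss_rate=0.0, fixed_rel=0.0, fixed_abs=0.0,
                         eta_in=1.0, eta_out=1.0, tau=1.0, t_u=1.0):
    """One explicit step of the storage balance om.Storage.balance:
    relative losses, fixed losses, then discharging and charging."""
    return (e_prev * (1 - loss_rate) ** (tau / t_u)
            - fixed_rel * e_nom * (tau / t_u)
            - fixed_abs * (tau / t_u)
            - p_out / eta_out * tau
            + p_in * eta_in * tau)

# Lossless storage: charging 10 units at eta_i = 0.9 for one step
# raises the stored energy by 9 units
e1 = storage_balance_step(e_prev=100, p_in=10, p_out=0, e_nom=1000,
                          eta_in=0.9)
assert abs(e1 - 109.0) < 1e-9
```

Note that the inflow is multiplied by its efficiency while the outflow is divided by its efficiency, so both conversion losses reduce the stored energy.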
CONSTRAINT_GROUP = True
class oemof.solph.components.OffsetTransformer(*args, **kwargs)[source]
Bases: oemof.network.network.Transformer
An object with one input and one output.
Parameters: coefficients (tuple) – Tuple containing the first two polynomial coefficients, i.e. the y-intercept and the slope of a linear equation. The tuple values can either be a scalar or a sequence with length of time horizon for simulation.
Notes
The sets, variables, constraints and objective parts are created
Examples
>>> from oemof import solph
>>> bel = solph.Bus(label='bel')
>>> bth = solph.Bus(label='bth')
>>> ostf = solph.components.OffsetTransformer(
... label='ostf',
... inputs={bel: solph.Flow(
... nominal_value=60, min=0.5, max=1.0,
... nonconvex=solph.NonConvex())},
... outputs={bth: solph.Flow()},
... coefficients=(20, 0.5))
>>> type(ostf)
<class 'oemof.solph.components.OffsetTransformer'>
constraint_group()[source]
class oemof.solph.components.OffsetTransformerBlock(*args, **kwargs)[source]
Bases: pyomo.core.base.block.SimpleBlock
Block for the relation of nodes with type OffsetTransformer
The following constraints are created:
$\begin{split}& P_{out}(t) = C_1(t) \cdot P_{in}(t) + C_0(t) \cdot Y(t) \\\end{split}$
Variables (V) and Parameters (P)
symbol attribute type explanation
$$P_{out}(t)$$ flow[n, o, t] V Power of output
$$P_{in}(t)$$ flow[i, n, t] V Power of input
$$Y(t)$$ status[i, n, t] V binary status variable of nonconvex input flow
$$C_1(t)$$ coefficients[1][n, t] P linear coefficient 1 (slope)
$$C_0(t)$$ coefficients[0][n, t] P linear coefficient 0 (y-intercept)
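Plugging the coefficients from the ostf example above into the constraint gives, as a plain-Python illustration:

```python
# Coefficients and flow bounds from the ostf example above
c0, c1 = 20, 0.5          # y-intercept and slope
p_in_nominal, p_in_min = 60, 0.5

# Unit on at minimum load: Y = 1, P_in = 0.5 * 60 = 30
y = 1
p_in = p_in_min * p_in_nominal
p_out = c1 * p_in + c0 * y
assert p_out == 35.0

# Unit off: Y = 0 forces the output to zero as well
assert c1 * 0 + c0 * 0 == 0
```

Because the offset $$C_0$$ is multiplied by the binary status variable, the transformer has a nonzero minimum output whenever it runs, which is why the input flow must be declared nonconvex.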
CONSTRAINT_GROUP = True
## oemof.solph.constraints module¶
Additional constraints to be used in an oemof energy model. This file is part of project oemof (github.com/oemof/oemof). It’s copyrighted by the contributors recorded in the version control history of the file, available from its original location oemof/oemof/solph/constraints.py
oemof.solph.constraints.emission_limit(om, flows=None, limit=None)[source]
Short handle for generic_integral_limit() with keyword=”emission_factor”.
Note
Flow objects require an attribute “emission_factor”!
oemof.solph.constraints.equate_variables(model, var1, var2, factor1=1, name=None)[source]
Adds a constraint to the given model that sets two variables equal to each other, scaled by a factor.
The following constraint is built:
$var\textit{1} \cdot factor\textit{1} = var\textit{2}$
Parameters: var1 (pyomo.environ.Var) – First variable; multiplied with factor1 and set equal to var2. var2 (pyomo.environ.Var) – Second variable, set equal to (var1 * factor1). factor1 (float) – Factor to define the proportion between the variables. name (str) – Optional name for the equation, e.g. in the LP file. By default the name is: equate + string representation of var1 and var2. model (oemof.solph.Model) – Model to which the constraint is added.
Examples
The following example shows how to define a transmission line in the investment mode by connecting both investment variables. Note that the equivalent periodical costs (epc) of the line are 40. You could also add them to one line and set them to 0 for the other line.
>>> import pandas as pd
>>> from oemof import solph
>>> date_time_index = pd.date_range('1/1/2012', periods=5, freq='H')
>>> energysystem = solph.EnergySystem(timeindex=date_time_index)
>>> bel1 = solph.Bus(label='electricity1')
>>> bel2 = solph.Bus(label='electricity2')
>>> energysystem.add(solph.Transformer(
... label='powerline_1_2',
... inputs={bel1: solph.Flow()},
... outputs={bel2: solph.Flow(
... investment=solph.Investment(ep_costs=20))}))
>>> energysystem.add(solph.Transformer(
... label='powerline_2_1',
... inputs={bel2: solph.Flow()},
... outputs={bel1: solph.Flow(
... investment=solph.Investment(ep_costs=20))}))
>>> om = solph.Model(energysystem)
>>> line12 = energysystem.groups['powerline_1_2']
>>> line21 = energysystem.groups['powerline_2_1']
>>> solph.constraints.equate_variables(
... om,
... om.InvestmentFlow.invest[line12, bel2],
... om.InvestmentFlow.invest[line21, bel1])
oemof.solph.constraints.generic_integral_limit(om, keyword, flows=None, limit=None)[source]
Set a global limit for flows weighted by an attribute named keyword. This attribute has to be added to every flow you want to take into account.
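Conceptually, the constraint bounds the keyword-weighted sum of all selected flows over all timesteps. A plain-Python sketch with toy data (flow names, numbers, and the per-timestep weighting are assumptions; in real usage you pass a solph Model and its Flow objects):

```python
# Toy data standing in for flows that carry an "emission_factor"
timeincrement = [1.0, 1.0, 1.0]          # hours per timestep
flows = {
    "gas_plant":  {"emission_factor": 0.2, "values": [50, 60, 40]},
    "coal_plant": {"emission_factor": 0.3, "values": [20, 10, 30]},
}
limit = 60.0

# The weighted integral the constraint bounds from above
total = sum(
    f["values"][t] * f["emission_factor"] * timeincrement[t]
    for f in flows.values()
    for t in range(len(timeincrement))
)
assert total <= limit  # the optimizer enforces this as a hard constraint
```

In the model itself the flow values are decision variables, so the solver chooses them such that this inequality holds.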
## oemof.solph.models module¶
Solph Optimization Models
This file is part of project oemof (github.com/oemof/oemof). It’s copyrighted by the contributors recorded in the version control history of the file, available from its original location oemof/oemof/solph/models.py
class oemof.solph.models.BaseModel(energysystem, **kwargs)[source]
Bases: pyomo.core.base.PyomoModel.ConcreteModel
The BaseModel for other solph-models (Model, MultiPeriodModel, etc.)
Parameters: energysystem (EnergySystem object) – Object that holds the nodes of an oemof energy system graph. constraint_groups (list (optional)) – Solph looks for these groups in the given energy system and uses them to create the constraints of the optimization problem. Defaults to Model.CONSTRAINTS. objective_weighting (array like (optional)) – Weights used for temporal objective function expressions. If nothing is passed, timeincrement will be used, which is calculated from the freq length of the energy system timeindex. auto_construct (boolean) – If this value is true, the sets, variables, constraints, etc. are added automatically when instantiating the model. For a sequential model building process set this value to False and use the methods _add_parent_block_sets, _add_parent_block_variables, _add_blocks, _add_objective. Attributes: timeincrement (sequence) – Time increments. flows (dict) – Flows of the model. name (str) – Name of the model. es (solph.EnergySystem) – Energy system of the model. meta (pyomo.opt.results.results_.SolverResults or None) – Solver results. dual (… or None) rc (… or None)
CONSTRAINT_GROUPS = []
receive_duals()[source]
Method sets solver suffix to extract information about dual variables from solver. Shadow prices (duals) and reduced costs (rc) are set as attributes of the model.
relax_problem()[source]
Relaxes integer variables to reals of optimization model self.
results()[source]
Returns a nested dictionary of the results of this optimization
solve(solver='cbc', solver_io='lp', **kwargs)[source]
Takes care of communication with solver to solve the model.
Other Parameters: solver (string) – solver to be used, e.g. "glpk", "gurobi", "cplex". solver_io (string) – pyomo solver interface file format: "lp", "python", "nl", etc. **kwargs (keyword arguments) – Possible keys are described below: solve_kwargs (dict) – Other arguments for the pyomo.opt.SolverFactory.solve() method, e.g. {"tee": True}. cmdline_options (dict) – Dictionary with command line options for the solver, e.g. {"mipgap": "0.01"} results in "--mipgap 0.01" and {"interior": " "} results in "--interior". The Gurobi solver takes numeric parameter values such as {"method": 2}.
class oemof.solph.models.Model(energysystem, **kwargs)[source]
An energy system model for operational and investment optimization.
Parameters: energysystem (EnergySystem object) – Object that holds the nodes of an oemof energy system graph constraint_groups (list) – Solph looks for these groups in the given energy system and uses them to create the constraints of the optimization problem. Defaults to Model.CONSTRAINTS **The following basic sets are created** NODES – A set with all nodes of the given energy system. TIMESTEPS – A set with all timesteps of the given time horizon. FLOWS – A 2 dimensional set with all flows. Index: (source, target) **The following basic variables are created** flow – Flow from source to target indexed by FLOWS, TIMESTEPS. Note: Bounds of this variable are set depending on attributes of the corresponding flow object.
CONSTRAINT_GROUPS = [<class 'oemof.solph.blocks.Bus'>, <class 'oemof.solph.blocks.Transformer'>, <class 'oemof.solph.blocks.InvestmentFlow'>, <class 'oemof.solph.blocks.Flow'>, <class 'oemof.solph.blocks.NonConvexFlow'>]
## oemof.solph.network module¶
Classes used to model energy supply systems within solph.
Classes are derived from oemof core network classes and adapted for specific optimization tasks. An energy system is modelled as a graph/network of nodes with very specific constraints on which types of nodes are allowed to be connected.
This file is part of project oemof (github.com/oemof/oemof). It’s copyrighted by the contributors recorded in the version control history of the file, available from its original location oemof/oemof/solph/network.py
class oemof.solph.network.Bus(*args, **kwargs)[source]
Bases: oemof.network.network.Bus
A balance object. Every node has to be connected to a Bus.
Notes
The following sets, variables, constraints and objective parts are created
constraint_group()[source]
class oemof.solph.network.EnergySystem(**kwargs)[source]
Bases: oemof.network.energy_system.EnergySystem
A variant of EnergySystem specially tailored to solph.
In order to work in tandem with solph, instances of this class always use solph.GROUPINGS. If custom groupings are supplied via the groupings keyword argument, solph.GROUPINGS is prepended to those.
If you know what you are doing and want to use solph without solph.GROUPINGS, you can just use core's EnergySystem directly.
class oemof.solph.network.Flow(**kwargs)[source]
Bases: oemof.network.network.Edge
Defines a flow between two nodes.
Keyword arguments are used to set the attributes of this flow. Parameters which are handled specially are noted below. For the case where a parameter can be either a scalar or an iterable, a scalar value will be converted to a sequence containing the scalar value at every index. This sequence is then stored under the parameter’s key.
Parameters: nominal_value (numeric, $$P_{nom}$$) – The nominal value of the flow. If this value is set the corresponding optimization variable of the flow object will be bounded by this value multiplied with min (lower bound)/max (upper bound). max (numeric (iterable or scalar), $$f_{max}$$) – Normed maximum value of the flow. The absolute maximum of the flow will be calculated by multiplying nominal_value with max. min (numeric (iterable or scalar), $$f_{min}$$) – Normed minimum value of the flow (see max). actual_value (numeric (iterable or scalar), $$f_{actual}$$) – Normed fixed value for the flow variable. Will be multiplied with the nominal_value to get the absolute value. If fixed is set to True the flow variable will be fixed to actual_value * nominal_value, i.e. this value is set exogenously. positive_gradient (dict, default: {'ub': None, 'costs': 0}) – A dictionary containing the following two keys: 'ub': numeric (iterable, scalar or None), the normed upper bound on the positive difference (flow[t-1] < flow[t]) of two consecutive flow values. 'costs': numeric (scalar or None), the gradient cost per unit. negative_gradient (dict, default: {'ub': None, 'costs': 0}) – A dictionary containing the following two keys: 'ub': numeric (iterable, scalar or None), the normed upper bound on the negative difference (flow[t-1] > flow[t]) of two consecutive flow values. 'costs': numeric (scalar or None), the gradient cost per unit. summed_max (numeric, $$f_{sum,max}$$) – Specific maximum value summed over all timesteps. Will be multiplied with the nominal_value to get the absolute limit. summed_min (numeric, $$f_{sum,min}$$) – see above. variable_costs (numeric (iterable or scalar)) – The costs associated with one unit of the flow. If this is set the costs will be added to the objective expression of the optimization problem. fixed (boolean) – Boolean value indicating if a flow is fixed during the optimization problem to its ex-ante set value.
Used in combination with the actual_value. investment (Investment) – Object indicating if a nominal_value of the flow is determined by the optimization problem. Note: This will refer all attributes to an investment variable instead of to the nominal_value. The nominal_value should not be set (or set to None) if an investment object is used. nonconvex (NonConvex) – If a nonconvex flow object is added here, the flow constraints will be altered significantly as the mathematical model for the flow will be different, i.e. constraints etc. from NonConvexFlow will be used instead of Flow. Note: at the moment this does not work if the investment attribute is set.
Notes
The following sets, variables, constraints and objective parts are created
Examples
Creating a fixed flow object:
>>> f = Flow(actual_value=[10, 4, 4], fixed=True, variable_costs=5)
>>> f.variable_costs[2]
5
>>> f.actual_value[2]
4
Creating a flow object with time-dependent lower and upper bounds:
>>> f1 = Flow(min=[0.2, 0.3], max=0.99, nominal_value=100)
>>> f1.max[1]
0.99
class oemof.solph.network.Sink(*args, **kwargs)[source]
Bases: oemof.network.network.Sink
An object with one input flow.
constraint_group()[source]
class oemof.solph.network.Source(*args, **kwargs)[source]
Bases: oemof.network.network.Source
An object with one output flow.
constraint_group()[source]
class oemof.solph.network.Transformer(*args, **kwargs)[source]
Bases: oemof.network.network.Transformer
A linear Transformer object with n inputs and n outputs.
Parameters: conversion_factors (dict) – Dictionary containing conversion factors for conversion of each flow. Keys are the connected bus objects. The dictionary values can either be a scalar or an iterable with length of time horizon for simulation.
Examples
Defining a linear transformer:
>>> from oemof import solph
>>> bgas = solph.Bus(label='natural_gas')
>>> bcoal = solph.Bus(label='hard_coal')
>>> bel = solph.Bus(label='electricity')
>>> bheat = solph.Bus(label='heat')
>>> trsf = solph.Transformer(
... label='pp_gas_1',
... inputs={bgas: solph.Flow(), bcoal: solph.Flow()},
... outputs={bel: solph.Flow(), bheat: solph.Flow()},
... conversion_factors={bel: 0.3, bheat: 0.5,
... bgas: 0.8, bcoal: 0.2})
>>> print(sorted([x[1][5] for x in trsf.conversion_factors.items()]))
[0.2, 0.3, 0.5, 0.8]
>>> type(trsf)
<class 'oemof.solph.network.Transformer'>
>>> sorted([str(i) for i in trsf.inputs])
['hard_coal', 'natural_gas']
>>> trsf_new = solph.Transformer(
... label='pp_gas_2',
... inputs={bgas: solph.Flow()},
... outputs={bel: solph.Flow(), bheat: solph.Flow()},
... conversion_factors={bel: 0.3, bheat: 0.5})
>>> trsf_new.conversion_factors[bgas][3]
1
Notes
The following sets, variables, constraints and objective parts are created
constraint_group()[source]
## oemof.solph.options module¶
Optional classes to be added to a network class. This file is part of project oemof (github.com/oemof/oemof). It’s copyrighted by the contributors recorded in the version control history of the file, available from its original location oemof/oemof/solph/options.py
class oemof.solph.options.Investment(maximum=inf, minimum=0, ep_costs=0, existing=0, nonconvex=False, offset=0)[source]
Bases: object
Parameters:
- maximum (float, $$P_{invest,max}$$ or $$E_{invest,max}$$) – Maximum of the additional invested capacity.
- minimum (float, $$P_{invest,min}$$ or $$E_{invest,min}$$) – Minimum of the additional invested capacity. If nonconvex is True, minimum defines the threshold for the invested capacity.
- ep_costs (float, $$c_{invest,var}$$) – Equivalent periodical costs for the investment per flow capacity.
- existing (float, $$P_{exist}$$ or $$E_{exist}$$) – Existing / installed capacity. The invested capacity is added on top of this value. Not applicable if nonconvex is set to True.
- nonconvex (bool) – If True, a binary variable for the status of the investment is created. This enables additional fixed investment costs (offset) independent of the invested flow capacity. Therefore, use the offset parameter.
- offset (float, $$c_{invest,fix}$$) – Additional fixed investment costs. Only applicable if nonconvex is set to True.
For the variables, constraints and parts of the objective function, which are created, see oemof.solph.blocks.InvestmentFlow and oemof.solph.components.GenericInvestmentStorageBlock.
class oemof.solph.options.NonConvex(**kwargs)[source]
Bases: object
Parameters:
- startup_costs (numeric (iterable or scalar)) – Costs associated with a start of the flow (representing a unit).
- shutdown_costs (numeric (iterable or scalar)) – Costs associated with the shutdown of the flow (representing a unit).
- activity_costs (numeric (iterable or scalar)) – Costs associated with the active operation of the flow, independently of the actual output.
- minimum_uptime (numeric (1 or positive integer)) – Minimum time that a flow must be greater than its minimum flow after startup. Be aware that minimum up- and downtimes can contradict each other and may lead to infeasible problems.
- minimum_downtime (numeric (1 or positive integer)) – Minimum time a flow is forced to zero after shutting down. Be aware that minimum up- and downtimes can contradict each other and may lead to infeasible problems.
- maximum_startups (numeric (0 or positive integer)) – Maximum number of start-ups.
- maximum_shutdowns (numeric (0 or positive integer)) – Maximum number of shutdowns.
- initial_status (numeric (0 or 1)) – Integer value indicating the status of the flow in the first time step (0 = off, 1 = on). For minimum up- and downtimes, the initial status is set for the respective values in the edge regions, e.g. if a minimum uptime of four timesteps is defined, the initial status is fixed for the four first and last timesteps of the optimization period. If both up- and downtimes are defined, the initial status is set for the maximum of both, e.g. for six timesteps if a minimum downtime of six timesteps is defined in addition to a four-timestep minimum uptime.
max_up_down
Compute or return the _max_up_down attribute.
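Since the documentation does not spell out how the attribute is computed, here is a hedged sketch (function name and behavior are assumptions, not the actual oemof source): given that the initial status above is fixed "for the maximum of both" up- and downtime, max_up_down is plausibly just the larger of the two, with unset values treated as zero.

```python
# Hypothetical sketch of a max_up_down computation; the real
# oemof.solph implementation may differ in detail.
def max_up_down(minimum_uptime=None, minimum_downtime=None):
    """Return the larger of minimum up- and downtime, treating None as 0."""
    return max(minimum_uptime or 0, minimum_downtime or 0)

print(max_up_down(minimum_uptime=4, minimum_downtime=6))  # 6
print(max_up_down(minimum_uptime=4))                      # 4
```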
## oemof.solph.plumbing module¶
Plumbing stuff.
This file is part of project oemof (github.com/oemof/oemof). It’s copyrighted by the contributors recorded in the version control history of the file, available from its original location oemof/oemof/solph/plumbing.py
oemof.solph.plumbing.sequence(iterable_or_scalar)[source]
Tests whether an object is iterable (strings excepted) or scalar, and returns the original sequence if the object is iterable, or an 'emulated' sequence object of class _Sequence if the object is a scalar or string.
Parameters: iterable_or_scalar (iterable, None, int, float)
Examples
>>> sequence([1,2])
[1, 2]
>>> x = sequence(10)
>>> x[0]
10
>>> x[10]
10
>>> print(x)
[10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10]
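The behaviour shown above can be sketched with a minimal re-implementation (a hedged approximation; the actual _Sequence class in oemof.solph.plumbing may differ):

```python
# Sketch of an "emulated" sequence: a list that yields a default value
# for any index, growing on demand so that x[10] works like x[0].
class _Sequence(list):
    def __init__(self, default):
        super().__init__()
        self.default = default

    def __getitem__(self, i):
        # Grow the underlying list up to the requested index.
        while len(self) <= i:
            self.append(self.default)
        return super().__getitem__(i)

def sequence(iterable_or_scalar):
    """Pass real iterables through; wrap scalars/strings in _Sequence."""
    if hasattr(iterable_or_scalar, "__iter__") and not isinstance(
            iterable_or_scalar, str):
        return iterable_or_scalar
    return _Sequence(iterable_or_scalar)

print(sequence([1, 2]))  # [1, 2]
x = sequence(10)
print(x[0])              # 10
print(x[10])             # 10
print(x)                 # list grown to index 10, all entries 10
```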
## oemof.solph.processing module¶
Modules for providing a convenient data structure for solph results.
Information about the possible usage is provided within the examples.
This file is part of project oemof (github.com/oemof/oemof). It’s copyrighted by the contributors recorded in the version control history of the file, available from its original location oemof/oemof/outputlib/processing.py
oemof.solph.processing.convert_keys_to_strings(result, keep_none_type=False)[source]
Convert the dictionary keys to strings.
All (tuple) keys of the result object, e.g. results[(pp1, bus1)], are converted into strings that represent the object labels, e.g. results[('pp1', 'bus1')].
oemof.solph.processing.create_dataframe(om)[source]
Create a result dataframe with all optimization data.
Results from Pyomo are written into pandas DataFrame where separate columns are created for the variable index e.g. for tuples of the flows and components or the timesteps.
oemof.solph.processing.get_timestep(x)[source]
Get the timestep from oemof tuples.
The timestep from tuples (n, n, int), (n, n), (n, int) and (n,) is fetched as the last element. For time-independent data (scalars) zero is returned.
oemof.solph.processing.get_tuple(x)[source]
Get oemof tuple within iterable or create it.
Tuples from Pyomo are of type (n, n, int), (n, n) and (n, int). For single nodes n a tuple with one object (n,) is created.
oemof.solph.processing.meta_results(om, undefined=False)[source]
Fetch some meta data from the Solver. Feel free to add more keys.
Valid keys of the resulting dictionary are: ‘objective’, ‘problem’, ‘solver’.
om : oemof.solph.Model
A solved Model.
undefined : bool
By default (False) only defined keys can be found in the dictionary. Set to True to get also the undefined keys.
Returns: dict
oemof.solph.processing.parameter_as_dict(system, exclude_none=True)[source]
Create a result dictionary containing node parameters.
Results are written into a dictionary of pandas objects where a Series holds all scalar values and a dataframe all sequences for nodes and flows. The dictionary is keyed by flows (n, n) and nodes (n, None), e.g. parameter[(n, n)][‘sequences’] or parameter[(n, n)][‘scalars’].
Parameters: system (energy_system.EnergySystem) – A populated energy system. exclude_none (bool) – If True, all scalars and sequences containing None values are excluded.
Returns: dict – Parameters for all nodes and flows.
oemof.solph.processing.remove_timestep(x)[source]
Remove the timestep from oemof tuples.
The timestep is removed from tuples of type (n, n, int) and (n, int).
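The three tuple helpers above can be sketched as follows (behavior inferred from the docstrings; the real implementation may differ):

```python
# Hedged sketch of the oemof tuple helpers described above.
def get_tuple(x):
    """Wrap a single node into a 1-tuple; pass tuples through."""
    return x if isinstance(x, tuple) else (x,)

def get_timestep(x):
    """Return the trailing int timestep, or 0 for time-independent data."""
    return x[-1] if isinstance(x[-1], int) else 0

def remove_timestep(x):
    """Strip a trailing int timestep from the tuple, if present."""
    return x[:-1] if isinstance(x[-1], int) else x

print(get_timestep(("n1", "n2", 5)))     # 5
print(remove_timestep(("n1", "n2", 5)))  # ('n1', 'n2')
print(get_timestep(("n1", "n2")))        # 0
```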
oemof.solph.processing.results(om)[source]
Create a result dictionary from the result DataFrame.
Results from Pyomo are written into a dictionary of pandas objects where a Series holds all scalar values and a dataframe all sequences for nodes and flows. The dictionary is keyed by the nodes e.g. results[idx][‘scalars’] and flows e.g. results[n, n][‘sequences’].
## oemof.solph.views module¶
Modules for providing convenient views for solph results.
Information about the possible usage is provided within the examples.
This file is part of project oemof (github.com/oemof/oemof). It’s copyrighted by the contributors recorded in the version control history of the file, available from its original location oemof/oemof/outputlib/views.py
class oemof.solph.views.NodeOption[source]
Bases: str, enum.Enum
An enumeration.
All = 'all'
HasInputs = 'has_inputs'
HasOnlyInputs = 'has_only_inputs'
HasOnlyOutputs = 'has_only_outputs'
HasOutputs = 'has_outputs'
oemof.solph.views.convert_to_multiindex(group, index_names=None, droplevel=None)[source]
Convert dict to pandas DataFrame with multiindex
Parameters: group (dict) – Sequences of the oemof.solph.Model.results dictionary index_names (arraylike) – Array with names of the MultiIndex droplevel (arraylike) – List containing levels to be dropped from the dataframe
oemof.solph.views.filter_nodes(results, option=<NodeOption.All: 'all'>, exclude_busses=False)[source]
Get set of nodes from results-dict for given node option.
This function filters nodes from results for special needs. At the moment, the options defined in the NodeOption enumeration above are available.
Additionally, busses can be excluded by setting exclude_busses to True.
Parameters: results (dict) option (NodeOption) exclude_busses (bool) – If set, all bus nodes are excluded from the resulting node set.
Returns: set – A set of Nodes.
oemof.solph.views.get_node_by_name(results, *names)[source]
Searches results for nodes
Names are looked up in the nodes from results and returned either as a single node (in case only one name is given) or as a list of nodes. If a name is not found, None is returned.
oemof.solph.views.net_storage_flow(results, node_type)[source]
Calculates the net storage flow for storage models that have one input edge and one output edge both with flows within the domain of non-negative reals.
results: dict
A result dictionary from a solved oemof.solph.Model object
node_type: oemof.solph class
Specifies the type for which (storage) type net flows are calculated
Returns: pandas.DataFrame object with multiindex columns. The names of the column levels are (from, to, net_flow).
Examples
import oemof.solph as solph
from oemof.outputlib import views

# solve oemof solph model 'm', then collect the net storage flows
views.net_storage_flow(m.results(), node_type=solph.GenericStorage)
oemof.solph.views.node(results, node, multiindex=False, keep_none_type=False)[source]
Obtain results for a single node e.g. a Bus or Component.
Either a node or its label string can be passed. Results are written into a dictionary which is keyed by ‘scalars’ and ‘sequences’ holding respective data in a pandas Series and DataFrame.
oemof.solph.views.node_input_by_type(results, node_type, droplevel=None)[source]
Gets all inputs for all nodes of the type node_type and returns a dataframe.
results: dict
A result dictionary from a solved oemof.solph.Model object
node_type: oemof.solph class
Specifies the type of the node for that inputs are selected
import oemof.solph as solph
from oemof.outputlib import views

# solve oemof solph model 'm', then collect the inputs
views.node_input_by_type(m.results(), node_type=solph.Sink)
oemof.solph.views.node_output_by_type(results, node_type, droplevel=None)[source]
Gets all outputs for all nodes of the type node_type and returns a dataframe.
results: dict
A result dictionary from a solved oemof.solph.Model object
node_type: oemof.solph class
Specifies the type of the node for that outputs are selected
import oemof.solph as solph
from oemof.outputlib import views

# solve oemof solph model 'm', then collect the outputs
views.node_output_by_type(m.results(), node_type=solph.Transformer)
oemof.solph.views.node_weight_by_type(results, node_type)[source]
Extracts node weights (if exist) of all components of the specified node_type.
Node weights are endogenous optimization variables associated with the node itself and not with the edge between two nodes, for example the variable representing the storage level.
Parameters: results (dict) – A result dictionary from a solved oemof.solph.Model object node_type (oemof.solph class) – Specifies the type for which node weights should be collected
Example
from oemof.outputlib import views

# solve oemof model 'm', then collect node weights
views.node_weight_by_type(m.results(), node_type=solph.GenericStorage)
Student[VectorCalculus] - Maple Programming Help
Home : Support : Online Help : Education : Student Packages : Vector Calculus : Visualization Commands : Student/VectorCalculus/RadiusOfCurvature
Student[VectorCalculus]
compute the radius of curvature of a curve
Parameters
C - free or position Vector; specify the components of the curve t - (optional) name; specify the parameter of the curve options - (optional) equation(s) of the form option=value where option is one of output, circleoptions, circles, curveoptions, range, or view
Description
• The RadiusOfCurvature(C, t) calling sequence computes the radius of curvature of the curve C. This is defined to be 1/Curvature(C, t) when the curvature is not zero and infinity when the curvature is zero.
• If t is not specified, the command tries to determine a suitable variable name from the components of C. To do this, it checks all of the indeterminates of type name in the components of C and removes the ones that are determined to be constants.
If the resulting set has a single entry, this single entry is the variable name. If it has more than one entry, an error is raised.
• The options arguments primarily control plot options.
output = value, plot, or animation
This option controls the return value of the command.
– output = value returns the value of the radius of curvature. Plot options are ignored if output = value. This is the default value.
– output = plot returns a plot of the space curve and the circles. The number of circles is specified by the circles option. The RadiusOfCurvature command supports only three-dimensional Vector plots.
– output = animation returns an animation of the space curve and the circles. The number of circles of curvature is specified by the circles option.
• circleoptions = list
A list of plot options for plotting the circles. For more information on plotting options, see plot/options. The default value is [].
• circles = posint
Specifies how many circles are to be plotted or animated. The default value is 5.
• curveoptions = list
A list of plot options for plotting the space curve. For more information on plotting options, see plot/options. The default value is [].
• range = realcons..realcons
The range of the independent variable. The default value is 0..5.
• view = [realcons..realcons, realcons..realcons, realcons..realcons]
• caption = anything
A caption for the plot.
The default caption is constructed from the parameters and the command options. caption = "" disables the default caption. For more information about specifying a caption, see plot/typesetting.
Examples
> $\mathrm{with}\left(\mathrm{Student}\left[\mathrm{VectorCalculus}\right]\right):$
> $\mathrm{RadiusOfCurvature}\left(\mathrm{PositionVector}\left(\left[\mathrm{cos}\left(t\right),\mathrm{sin}\left(t\right),t\right]\right)\right)$
${2}$ (1)
> $\mathrm{RadiusOfCurvature}\left(⟨\mathrm{cos}\left(t\right),\mathrm{sin}\left(t\right),t⟩\right)$
${2}$ (2)
> $\mathrm{simplify}\left(\mathrm{Curvature}\left(⟨\mathrm{cos}\left(t\right),\mathrm{sin}\left(t\right),t⟩\right)\right)$
$\frac{{1}}{{2}}$ (3)
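The value returned in (1) and (2) agrees with a hand computation: for the circular helix $\langle a\cos t, a\sin t, b t\rangle$ the standard curvature formula gives a constant curvature, so with $a=b=1$ the curvature is $1/2$ and the radius of curvature is $2$.

```latex
% Curvature of the circular helix r(t) = (a cos t, a sin t, b t):
\kappa
  = \frac{\lVert r'(t)\times r''(t)\rVert}{\lVert r'(t)\rVert^{3}}
  = \frac{a\sqrt{a^{2}+b^{2}}}{\left(a^{2}+b^{2}\right)^{3/2}}
  = \frac{a}{a^{2}+b^{2}}
% With a = b = 1: kappa = 1/2, hence radius of curvature 1/kappa = 2.
```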
To play the following animations in this help page, right-click (Control-click, on Macintosh) the plot to display the context menu. Select Animation > Play.
> $\mathrm{RadiusOfCurvature}\left(⟨\mathrm{cos}\left(t\right),\mathrm{sin}\left(t\right),t⟩,\mathrm{output}=\mathrm{animation},\mathrm{scaling}=\mathrm{constrained}\right)$
> $\mathrm{RadiusOfCurvature}\left(⟨\mathrm{cos}\left(t\right),\mathrm{sin}\left(t\right),t⟩,\mathrm{output}=\mathrm{animation},\mathrm{circles}=20,\mathrm{scaling}=\mathrm{constrained},\mathrm{range}=0..10\right)$
The command to create the plot from the Plotting Guide is
> $\mathrm{RadiusOfCurvature}\left(⟨\mathrm{cos}\left(t\right),\mathrm{sin}\left(t\right),t⟩,\mathrm{output}=\mathrm{plot},\mathrm{circles}=3,\mathrm{range}=0..10,\mathrm{scaling}=\mathrm{constrained},\mathrm{curveoptions}=\left[\mathrm{orientation}=\left[60,270\right]\right]\right)$
# How do you calculate molecular weight?
Typically we calculate a molar mass, i.e. a mass of a given quantity of molecules. The given quantity is ${N}_{A}$, $\text{Avogadro's number}$ $=$ $6.022 \times {10}^{23} \cdot mol^{-1}$.
$\text{Avogadro's number}$ of ${}^{1}H$ has a mass of $1.00 \cdot g$ precisely; $\text{Avogadro's number}$ of ${}^{12}C$ has a mass of $12.00 \cdot g$ precisely. If I have $\text{Avogadro's number}$, $6.022 \times {10}^{23}$, individual ${}^{12}C{}^{1}H_4$ molecules, it follows that I have $\left(4 \times 1.00 + 12.00\right) \cdot g$. Typically we would say methane has a molar mass of $16.00 \cdot g \cdot mol^{-1}$.
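The same arithmetic can be written as a tiny calculation (using the rounded masses quoted above, 1.00 for $^{1}H$ and 12.00 for $^{12}C$; real isotopic masses differ slightly):

```python
# Molar-mass calculation mirroring the methane example above.
ATOMIC_MASS = {"H": 1.00, "C": 12.00}  # g/mol, rounded values from the text

def molar_mass(formula):
    """formula is a dict of element symbol -> atom count, e.g. CH4 below."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

print(molar_mass({"C": 1, "H": 4}))  # 16.0 g/mol for methane
```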
# Singular Book 2.1.26 -- computation of Hom
i1 : A = QQ[x,y,z];

i2 : M = cokernel matrix(A, {{1,2,3},{4,5,6},{7,8,9}})

o2 = cokernel | 1 2 3 |
              | 4 5 6 |
              | 7 8 9 |

o2 : A-module, quotient of A^3

i3 : N = cokernel matrix{{x,y},{z,0}}

o3 = cokernel | x y |
              | z 0 |

o3 : A-module, quotient of A^2

i4 : H = Hom(M,N)

o4 = subquotient (| 1  0 |, | y x 0 0 0 0 |)
                 | 0  1 |   | 0 z 0 0 0 0 |
                 | -2 0 |   | 0 0 y x 0 0 |
                 | 0 -2 |   | 0 0 0 z 0 0 |
                 | 1  0 |   | 0 0 0 0 y x |
                 | 0  1 |   | 0 0 0 0 0 z |

o4 : A-module, subquotient of A^6
H is a subquotient module. In Macaulay2, the most general form of a module is as a subquotient: a submodule of a cokernel module. For more about subquotient modules, see modules.
i5 : f = homomorphism H_{0}

o5 = | 1 -2 1 |
     | 0 0  0 |

o5 : Matrix

i6 : target f === N

o6 = true

i7 : source f === M

o7 = true

i8 : matrix f

o8 = | 1 -2 1 |
     | 0 0  0 |

o8 : Matrix A^2 <--- A^3
Macaulay2 has a modulo command (it was initially introduced in the original Macaulay, in the late 1980's), but it is not needed very often. It is used internally in Macaulay2 to implement kernels of module homomorphisms.
# Doing $\lim_{x\to0}\frac{\sin x - \tan x}{x^2\cdot\sin 2x}$ without L'Hopital
Without L'Hopital,
$$\lim_{x\to0}\frac{\sin x - \tan x}{x^2\cdot\sin 2x}$$
This is
$$\frac{\sin x -\frac{\sin x}{\cos x}}{x^2\cdot\sin 2x} = \frac{\frac{\sin x \cdot \cos x - \sin x}{\cos x}}{x^2\cdot\sin 2x} = \frac{\sin x\cdot \cos x - \sin x}{x^2\cdot\sin 2x\cdot \cos x}$$
Split that:
$$\frac{\sin x\cdot \cos x}{x^2\cdot\sin 2x\cdot \cos x} - \frac{\sin x}{x^2\cdot\sin 2x\cdot \cos x}$$
In the left side, we can cancel the $\cos x$ and also apply $\frac{\sin x}{x} = 1$ once:
$$\frac{1}{x\cdot\sin 2x} - \frac{\sin x}{x^2\cdot\sin 2x\cdot \cos x}$$
That was probably a bad idea, since $x \cdot \sin2x$ will definitely be $0$... But anyway, let's keep going with the right side. There, we can apply the identity $\frac{\sin x}{x} = 1$ again:
$$\frac{1}{x\cdot\sin 2x} - \frac{1}{x\cdot\sin 2x\cdot \cos x}$$
Hey, I could get rid of the $\sin 2x$ on the left side if I multiply and divide by $2x$... the same on the right side:
$$\frac{1}{2x^2} - \frac{1}{2x^2\cdot \cos x}$$
Looking pretty, but sadly that's not going anywhere. What can I do?
$$\frac{\sin x-\tan x}{x^{2}\sin 2x}=\frac{\sin x}{\sin 2x}\frac{\cos x -1}{x^{2} \cos x}=\frac{\sin x}{\sin 2x}\frac{-2\sin^{2}\frac{x}{2}}{x^{2}\cos x}$$ Now, $$\frac{\sin x}{\sin 2x}\to \frac{1}{2}$$ $$\frac{\sin^{2}\frac{x}{2}}{x^{2}}=1/4\frac{\sin^{2}\frac{x}{2}}{(x/2)^{2}}\to 1/4$$ And $\cos x \to 1$ So the limit is $-1/4$.
• How come $\frac{\sin x}{\sin 2x}\to \frac{1}{2}$? And also, I'm not sure I quite grasped the last line. I know that $\cos x \to 1$, but what does that do anyway? – Zol Tun Kul Oct 1 '15 at 8:34
• You can look at it as $\frac{1}{2} \frac{\sin x}{x} \frac{2x}{\sin 2x}$ – preferred_anon Oct 1 '15 at 8:38
• You need to know $\cos(x) \to 1$ because it appears in the fraction at the end of the first line. – preferred_anon Oct 1 '15 at 8:39
$$\lim_{x\to0}\frac{\sin x - \tan x}{x^2\cdot\sin 2x} =-\lim_{x\to0}\frac{\sin x(1-\cos x)}{2x^2\sin x\cos^2x} =-\lim_{x\to0}\frac{(1-\cos x)}{2x^2}\cdot\frac1{\lim_{x\to0}\cos^2x}$$
Now $$\lim_{x\to0}\frac{(1-\cos x)}{2x^2}=\lim_{x\to0}\frac{(1-\cos x)(1+\cos x)}{2x^2}\cdot\dfrac1{\lim_{x\to0}(1+\cos x)}=?$$
• It appears I am lacking on trigonometric properties. How did you go from $(\sin x - \tan x)$ to $\sin x(1 - \cos x)$? – Zol Tun Kul Oct 1 '15 at 8:10
• Did you just write $1/0$, disguised as $1/( \lim_{x \to 0} \sin x)$? – Najib Idrissi Oct 1 '15 at 8:10
• @ZolTunKul, $$\sin x-\tan x=\dfrac{\sin x(\cos x-1)}{\cos x}$$ – lab bhattacharjee Oct 1 '15 at 8:11
• @ZolTunKul, Please find updated answer – lab bhattacharjee Oct 1 '15 at 8:12
Notice, $$\lim_{x\to 0}\frac{\sin x-\tan x}{x^2\sin 2x}$$ $$=\lim_{x\to 0}\frac{\sin x\frac{(\cos x-1)}{\cos x}}{2x^2\sin x\cos x}$$ $$=\frac{1}{2}\lim_{x\to 0}\frac{\cos x-1}{x^2\cos^2 x}$$ $$=\frac{1}{2}\lim_{x\to 0}\frac{\cos x-1}{x^2}\cdot \lim_{x\to 0}\frac{1}{\cos^2x}$$ $$=\frac{1}{2}\lim_{x\to 0}\frac{\left(1-\frac{x^2}{2!}+O(x^4)\right)-1}{x^2}\cdot 1$$ $$=\frac{1}{2}\lim_{x\to 0}\frac{-\frac{x^2}{2!}+O(x^4)}{x^2}$$ $$=\frac{1}{2}\lim_{x\to 0}\left(-\frac{1}{2!}+O(x^2)\right)$$ $$=\frac{1}{2}\left(-\frac{1}{2}+0\right)=\color{red}{-\frac{1}{4}}$$
first :
$$\lim_{x\to 0}\frac{\sin x-\tan x}{x^3}=\frac{-1}{2}$$
now :
$$\lim_{x\to 0}\frac{\sin x-\tan x}{x^2\sin 2x}=\lim_{x\to 0}\frac{\sin x-\tan x}{x^3}.\frac{2x}{2\sin 2x}=?$$
since :
$$\lim_{x\to 0}\frac{2x}{\sin 2x}=1$$
so :
$$\lim_{x\to 0}\frac{\sin x-\tan x}{x^2\sin 2x}=\frac{-1}{2}.\frac{1}{2}=\frac{-1}{4}$$
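All three answers agree on $-1/4$; a quick numerical check (not from the original thread) confirms the quotient settles there as $x\to 0$:

```python
import math

# Evaluate (sin x - tan x) / (x^2 sin 2x) for shrinking x.
def q(x):
    return (math.sin(x) - math.tan(x)) / (x ** 2 * math.sin(2 * x))

for x in (0.1, 0.01, 0.001):
    print(x, q(x))  # values approach -0.25
```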
# Proof of Associativity in Boolean Algebra
I must prove the most basic associativity in boolean algebra and there is two equation to be proved:
(1) a+(b+c) = (a+b)+c (where + indicates OR). (2) a.(b.c) = (a.b).c (where . indicates AND).
I have a hint to solve this: You can prove that both sides in (1) are equal to [a+(b+c)].[(a+b)+c] (I'm pretty sure that it's coming from idempotency.).
We can use all axioms of boolean algebra: distributivity, commutativity, complements, identity elements, null elements, absorption, idempotency, a = (a')' theorem, a+a'b = a + b theorem (' indicates NOT) except De Morgan's Law. Also duality of boolean algebra for sure.
-
I assume it should be true (and known) that $a + ab = a$.
Assuming this holds, let $x = a+(b+c)$ and $y = (a+b)+c$. We want to show that $x = y$, and following the hint we reduce to showing $x = xy = y$.
I claim that $ax = a, \ bx = b, \ cx = c$, and likewise for $y$. We check for $ax$: $$ax = aa + a(b+c) = a + a(b+c) = a$$ Likewise, for $bx$: $$bx = ba + b(b+c) = ba + (bb+bc) = ba + (b+bc) = ba + b = b$$ The remaining checks are analogous.
Using these identities, you can derive that anything made up of $a,b,c,+,.$ does not change when multiplied by $x$, in particular $yx = x$: $$yx = ((a+b)+c)x = (a+b)x+cx = (ax+bx)+cx = (a+b)+c = y$$ You can use a symmetric argument to conclude that $yx = xy = x$, and hence the claim follows.
For products, you can use a similar trick. Let $x = a.(b.c)$ and $y = (a.b).c$. I claim that $x = x + y = y$. To see this, first note that $x + a = a$ (because $x+a = a + a.(b.c) = a$). Secondly, $x+b = b$, because $$x+b = a.(b.c) + b = a.(b.c) + a.b + a'.b = a.(b.c+b) + a'.b = a.b + a'.b = b$$ (I hope this is legit). Likewise, $x+c = c$. Finally, $x + y = y$, because: $$y = (a.b).c = ((a+x).(b+x)).(c+x) = (a.b + x).(c+x) = (a.b).c + x = y + x$$ (I used the identity $(u+t).(v+t) = u.v + t.u + t.v + t.t = u.v +t$). The proof that $y+x = x$ is symmetric.
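As a sanity check of both laws (the axiomatic argument above covers arbitrary Boolean algebras, this only checks the two-element one), an exhaustive verification over $\{0,1\}^3$:

```python
from itertools import product

# Exhaustively verify both associativity laws on the Boolean algebra {0, 1}.
for a, b, c in product((0, 1), repeat=3):
    assert (a | (b | c)) == ((a | b) | c)   # (1): a+(b+c) = (a+b)+c
    assert (a & (b & c)) == ((a & b) & c)   # (2): a.(b.c) = (a.b).c
print("associativity verified on all 8 assignments")
```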
-
Thanks for that beautiful answer and it's totally useful. Thanks a lot again. I must ask that, did we also prove the associativity of multiplication? I mean with this process, did we prove the equation a.(b.c)=(a.b).c ? – Hazım Türkkan Mar 18 '13 at 12:17
I overlooked the part of the problem about the products, sorry about that. I just added that, see if I am making sense there. – Feanor Mar 18 '13 at 12:43
Sorry but I didn't understand a point; here is this: I think (a.b+x).(c+x) = (a.b).(c+x) => by x's property and then (a.b).c + (a.b).x Because (a.b).c + x is destroying parentheses of (c+x). Can you explain it? – Hazım Türkkan Mar 18 '13 at 14:42
Sorry for my rush, I understood it now it came from (a.b+x).(c+x)=(a.b).c + (a.b).x + x.c + x.x and then idempotent law and two absorptions. Thank you so much that's the final and perfect answer. You are the best. – Hazım Türkkan Mar 18 '13 at 14:54
# Functions and the Intermediate Value theorem
1. Nov 17, 2009
### dancergirlie
1. The problem statement, all variables and given/known data
Let f : [0, 1] --> R be continuous on [0, 1], and assume that the range of f is contained
in [0, 1]. Prove that there exists an x in [0, 1] satisfying f(x) = x.
2. Relevant equations
3. The attempt at a solution
Well i am almost positive I need to use the intermediate value theorem.
First I could claim that either f(0)>x>f(1) or f(0)<x<f(1), where x is a value in (0,1).
Not too sure what to do, i think the key is something to do with the range of f being contained in [0,1], but any help would be great!
2. Nov 17, 2009
### HallsofIvy
Staff Emeritus
If f(0)= 0, we are done. If f(1)= 1, we are done. So we can assume that $f(0)\ne 0$ and that $f(1)\ne 1$. But f(0) must be in [0, 1]. If it is not equal to 0, then f(0)> 0. Similarly, we must have f(1)< 1. Define H(x)= f(x)- x. What is H(0)? What is H(1)? Apply the intermediate value theorem to H(x).
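The hint is constructive enough to run numerically. The following is an illustration, not part of the proof: since H(0) > 0 and H(1) < 0 in the non-trivial case, bisection on H(x) = f(x) - x locates a fixed point for any sample continuous f mapping [0, 1] into [0, 1] (here f = cos, which maps [0, 1] into [cos 1, 1]).

```python
import math

# Bisection on H(x) = f(x) - x over [0, 1].
def fixed_point(f, lo=0.0, hi=1.0, tol=1e-10):
    if f(lo) == lo:
        return lo
    if f(hi) == hi:
        return hi
    # Now H(lo) > 0 and H(hi) < 0, so H changes sign on [lo, hi].
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(fixed_point(math.cos), 6))  # 0.739085, the fixed point of cos
```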
3. Nov 17, 2009
### dancergirlie
Thanks so much for the help!
#### Archived
This topic is now archived and is closed to further replies.
# Making one field affect another (Win32)
This topic is 5045 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
I've looked all over the place and can't really find any good info/tutes on this. What I have is two radio buttons and another text field (which is grayed out by default). Once one of the radio buttons is selected, I need the text field to become available to accept input. How would I go about doing that? I'm using MSC++ 6 if that helps.
##### Share on other sites
I assume you're just doing plain win32 code? Just intercept the WM_COMMAND message (like you'd do for any button) then check for the button's ID (as per normal again), then you get the button's state with IsDlgButtonChecked(), then (IIRC) you can use the WM_ENABLE message to enable/disable the text boxes (can anyone else confirm the use of WM_ENABLE + wParam = true for enabled, false for disabled):
int wndproc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    switch(message)
    {
    case WM_COMMAND:
        switch(LOWORD(wParam))
        {
        case IDC_MY_RADIO_BUTTON:
            {
                bool checked = IsDlgButtonChecked(hWnd, IDC_MY_RADIO_BUTTON) == BST_CHECKED;
                SendDlgItemMessage(hWnd, IDC_MY_RADIO_BUTTON, WM_ENABLE, checked, 0);
            }
            break;
        // other stuff
        }
    }
}
##### Share on other sites
I assume I have to replace the IDC_MY_RADIO_BUTTON handle with my text field handle in the SendDlgItemMessage function? (assuming that was an error in the first place)
Either way, it didn't work for some reason. I also found out about the EnableWindow() function (supposedly I can just typecast HWND onto my textfield) but that isn't working either.
##### Share on other sites
yep - sorry, the sendmessage should have been to the text field - it's a bit of a pain enabling the windows as far as I remember. I think I've had some mileage using getting the style bits with lStyleBits = GetWindowLong(hWnd, GWL_STYLE), clearing or setting the WS_DISABLED bit then using SetWindowLong(hWnd, GWL_STYLE, lStyleBits);
##### Share on other sites
...where obviously hwnd is the handle of the thing to enable/disable
# Maximize sum of reciprocals vs Minimize sums
Will the returned result of the function
$$\max\{\tfrac{1}{a}+\tfrac{1}{f}, \tfrac{1}{b}+\tfrac{1}{e}, \tfrac{1}{c}+\tfrac{1}{d}\}$$
return the same set $\{a,f\}$, $\{b,e\}$ or $\{c,d\}$ as the function
$$\min\{a+f, b+e, c+d\}\quad?$$
Assume all numbers are positive and real-valued.
In other words, if, for example, $$a+f < b+e\quad\text{ and }\quad a+f < c+d,$$ will it be true that $$\tfrac{1}{a}+\tfrac{1}{f} > \tfrac{1}{b}+\tfrac{1}{e}\quad\text{ and }\quad \tfrac{1}{a}+\tfrac{1}{f} > \tfrac{1}{c}+\tfrac{1}{d}\quad ?$$
-
Do you mean $\text{argmin}$ i.e. the arguments $a,b,\ldots$ that minimize the corresponding sums? – Listing Nov 30 '11 at 9:13
No. For example, let $a=c=d=f=1$, $b=\frac{1}{2}$, and $e=2$. Then $$\max\{\tfrac{1}{1}+\tfrac{1}{1},\tfrac{1}{2}+\tfrac{1}{\frac{1}{2}},\tfrac{1}{1}+\tfrac{1}{1}\}=\max\{2,\tfrac{5}{2},2\}=\tfrac{5}{2}$$ returns $\{b,e\}$, but $$\min\{1+1,2+\tfrac{1}{2},1+1\}=\min\{2,\tfrac{5}{2},2\}=2$$ returns either $\{a,f\}$ or $\{c,d\}$ (take your pick).
-
Wow, I really screwed up the question. Editing it right now to reflect what I really meant. Your have the right answer for the wrong question (stupid me). – Tabgok Nov 30 '11 at 9:13
@Tabgok: No problem, happens to everyone :) I've now edited my answer to reflect your new question, if I've understood it correctly. – Zev Chonoles Nov 30 '11 at 9:24
Thank you so much! Spent a long time trying to figure this out analytically - looked for a counter-example but everything I tried worked. – Tabgok Nov 30 '11 at 9:43
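The accepted counterexample is easy to re-check numerically:

```python
# Counterexample values: a = c = d = f = 1, b = 1/2, e = 2.
a, b, c, d, e, f = 1, 0.5, 1, 1, 2, 1

sums   = {"a+f": a + f, "b+e": b + e, "c+d": c + d}
recips = {"a+f": 1/a + 1/f, "b+e": 1/b + 1/e, "c+d": 1/c + 1/d}

print(sums)    # the minimum sum 2 is attained by a+f and c+d, not b+e
print(recips)  # the maximum reciprocal sum 2.5 is attained only by b+e
```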
### A New Trapdoor over Module-NTRU Lattice and its Application to ID-based Encryption
Jung Hee Cheon, Duhyeong Kim, Taechan Kim, and Yongha Son
##### Abstract
A trapdoor over an NTRU lattice proposed by Ducas, Lyubashevsky and Prest~(ASIACRYPT 2014) has been widely used in various cryptographic primitives such as identity-based encryption~(IBE) and digital signatures, due to its high efficiency compared to previous lattice trapdoors. However, most applications use this trapdoor with power-of-two cyclotomic rings, and hence to obtain a higher security level one must double the ring dimension, which results in a huge loss of efficiency. In this paper, we give a new way to overcome this problem by introducing a generalized notion of NTRU lattices which we call \emph{Module-NTRU}~(MNTRU) lattices, and show how to efficiently generate a trapdoor over MNTRU lattices. Moreover, beyond giving parameter flexibility, we further show that the Gram-Schmidt norm of the trapdoor can be brought down to about $q^{1/d},$ where MNTRU covers the cases $d \ge 2$ and includes NTRU as the $d = 2$ case. Since the efficiency of trapdoor-based IBE is closely related to the Gram-Schmidt norm of the trapdoor, our trapdoor over MNTRU lattices yields a more efficient IBE scheme than the previously best one of Ducas, Lyubashevsky and Prest, while providing the same security level.
Available format(s)
Category
Public-key cryptography
Publication info
Preprint. Minor revision.
Keywords
SIS trapdoor, Module-NTRU lattice, Identity-based encryption
Contact author(s)
jhcheon @ snu ac kr
doodoo1204 @ snu ac kr
taechan kim ym @ hco ntt co jp
emsskk @ snu ac kr
History
Short URL
https://ia.cr/2019/1468
CC BY
BibTeX
@misc{cryptoeprint:2019/1468,
author = {Jung Hee Cheon and Duhyeong Kim and Taechan Kim and Yongha Son},
title = {A New Trapdoor over Module-NTRU Lattice and its Application to ID-based Encryption},
howpublished = {Cryptology ePrint Archive, Paper 2019/1468},
year = {2019},
note = {\url{https://eprint.iacr.org/2019/1468}},
url = {https://eprint.iacr.org/2019/1468}
}
|
{}
|
# Triple Integrals: Spherical Coordinates - Finding the Bounds for ρ
1. Jun 10, 2012
### theBEAST
1. The problem statement, all variables and given/known data
Find the volume of the solid that lies above the cone z = √(x² + y²) and below the sphere x² + y² + z² = z.
2. Relevant equations
x² + y² + z² = ρ²
3. The attempt at a solution
The main issue I have with this question is finding what the boundary of integration is for ρ. I tried to solve for it by:
https://dl.dropbox.com/u/64325990/Photobook/Photo%202012-06-10%207%2041%2030%20PM.jpg [Broken]
I end up getting 0 ≤ ρ ≤ √2·sinΦ.
However, the answer says 0 ≤ ρ ≤ cosΦ. What am I doing wrong?
Last edited by a moderator: May 6, 2017
2. Jun 10, 2012
### LCKurtz
The outer surface is the sphere $x^2+y^2+z^2=z$. Writing that in spherical coordinates gives $\rho^2=\rho \cos\phi$. Dividing by $\rho$ gives $\rho = \cos\phi$. So $\rho$ goes from $0$ to $\cos\phi$. You get the cone with an appropriate limits for $\phi$.
Last edited by a moderator: May 6, 2017
3. Jun 10, 2012
### theBEAST
Oh wow, that makes so much sense now! Thanks!!!
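With $\rho \in [0, \cos\phi]$, $\phi \in [0, \pi/4]$ (the cone), and $\theta \in [0, 2\pi]$, the volume works out to $\pi/8$. A quick stdlib-only numerical check of that triple integral (midpoint rule over $\phi$, after doing the $\rho$-integral by hand):

```python
import math

# Integrate rho^2 sin(phi) over 0 <= rho <= cos(phi), 0 <= phi <= pi/4, 0 <= theta < 2*pi.
# The inner rho-integral gives cos(phi)^3 / 3; do the phi-integral numerically.
n = 100_000
dphi = (math.pi / 4) / n
inner = sum((math.cos((i + 0.5) * dphi) ** 3 / 3) * math.sin((i + 0.5) * dphi) * dphi
            for i in range(n))
volume = 2 * math.pi * inner

print(volume, math.pi / 8)  # both approximately 0.3926990
```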
|
{}
|
# Smallest Graph that is Regular but not Vertex-Transitive?
I'm trying to find the smallest graph that is regular but not vertex-transitive, where by smallest I mean "least number of vertices", and if two graphs have the same number of vertices, then the smaller is the one with the lower number of edges.
I currently have that the smallest such graph is the disjoint union of the three-cycle and the four-cycle.
Are there any smaller graphs?
## 1 Answer
Note that a graph is vertex transitive if and only if its complement is, and a regular graph on at most six vertices is the complement of a regular graph with valency at most two. It is easy to write down all graphs with valency at most two on at most six vertices.
• Is that easy? There are many graphs on at most six vertices. Even to eliminate all the ones with valency $> 2$ would still leave me checking on the order of $2^{15}$ graphs. Is there a particular technique you have in mind? – Newb Apr 21 '14 at 12:00
• You only want regular graphs on at most six vertices with max valency two. There are not very many. If these are all vertex transitive, then all regular graphs with at most six vertices are vertex transitive. – Chris Godsil Apr 21 '14 at 12:07
• Okay, I sketched it out. The regular graphs on $n$ vertices with valency $2$ are all just cycles, which are vertex-transitive. Is that correct? This leads me to believe that I've found the minimal example (seeing as a regular graph on 7 vertices with fewer than 7 edges is not possible). – Newb Apr 21 '14 at 12:12
• @Newb Yes. A regular graph with valency $0$ is trivially transitive. Regular with valency $1$ is always vertex transitive (it's just a bunch of isolated edges). Regular with valency $2$ means we have disjoint cycles; as each must have length $\ge3$, to even have more than one cycle (which is always transitive) we need $n\ge 6$; with $n=6$ we could have two $3$-cycles, still transitive; with $n=7$, we have your example. – Hagen von Eitzen Apr 21 '14 at 12:27
• @HagenvonEitzen As I thought. Thanks for making it clear! – Newb Apr 21 '14 at 12:28
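A stdlib-only sanity check of the minimal example $C_3 \cup C_4$ (the vertex labels are my own): every vertex has degree 2, but only the $C_3$ vertices lie on a triangle, and triangle membership is preserved by any automorphism, so the graph cannot be vertex-transitive.

```python
from itertools import combinations

# C3 on {0, 1, 2} together with C4 on {3, 4, 5, 6}
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (5, 6), (3, 6)]
adj = {v: set() for v in range(7)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# The graph is 2-regular:
assert all(len(adj[v]) == 2 for v in adj)

# Triangle membership distinguishes C3 vertices from C4 vertices
in_triangle = {v: any(y in adj[x] for x, y in combinations(adj[v], 2))
               for v in adj}
print(sorted(v for v in adj if in_triangle[v]))  # [0, 1, 2]
```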
|
{}
|
# Error when manipulating files: zipfile.BadZipFile: File is not a zip file
While writing this code I wanted the program to check whether a file exists at a given path, using the os module, and to create the file under the current path if it does not, for subsequent operations. As far as I know, the os module does not provide a method to create a file, so this has to be done another way. At first I used a plain with open to create the file. The creation succeeded, but the next step, storing the data, failed with zipfile.BadZipFile: File is not a zip file, complaining that the file is not a zip archive. Then came the flash of inspiration: an .xlsx file is really a zip archive, and an empty file created with open() is not one, so pandas cannot operate on it. In the end the overall code did not change; the only change is in the step that creates the file:
import os
import pandas as pd

def isfile(self, file):
    print(file)
    if not os.path.isfile(file):
        # Create a table object with the expected columns but no rows yet
        df = pd.DataFrame(columns=['name', 'shake number', 'weibo', 'profile', 'estimated sales'])
        df.to_excel(file, index=False)  # Save the empty table as a valid .xlsx file
    return file
Calling the function creates the file if needed and returns the path for subsequent operations.
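The root cause can be reproduced with the standard library alone: an .xlsx file is a ZIP archive, but a file created empty via open() is not, which is exactly what zipfile (used under the hood by pandas' Excel reader) complains about. A minimal sketch (the filename is hypothetical):

```python
import os
import zipfile

open("empty.xlsx", "w").close()  # the old approach: creates a 0-byte file
try:
    zipfile.ZipFile("empty.xlsx")
    is_valid = True
except zipfile.BadZipFile:       # "File is not a zip file"
    is_valid = False
os.remove("empty.xlsx")

print(is_valid)  # False
```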
# [Solved] Win10 and Linux address reading format is different (cv2.error: OpenCV(4.2.0) /io/opencv…)
img = cv2.resize(img, (512, 1024))
cv2.error: OpenCV(4.2.0) /io/opencv/modules/imgproc/src/resize.cpp:4045: error: (-215:Assertion failed) !ssize.empty() in function 'resize'
The code runs normally on Windows 10, but the above error is reported on Linux. My analysis: it is caused by the different path formats of Windows 10 and Linux.
img = cv2.imread(item['path'])
#item['path']:'./data/TuSimple/LaneDetection\clips/0313-2/42120/20.jpg'
There is a '\' in the address item['path'], which causes an error on Linux,
so change the '\' in item['path'] to '/':
item['path']=item['path'].replace('\\', '/')#Add this code
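A more general fix (an alternative, not from the original post) is to let pathlib normalize the separators; PureWindowsPath treats both '\' and '/' as separators, so as_posix() produces a clean forward-slash path:

```python
from pathlib import PureWindowsPath

p = './data/TuSimple/LaneDetection\\clips/0313-2/42120/20.jpg'
portable = PureWindowsPath(p).as_posix()

print(portable)  # .../LaneDetection/clips/0313-2/42120/20.jpg with forward slashes only
```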
# How to Solve Python ImportError: DLL load failed: unable to find the specified program, when using TensorFlow
Preface
There are various problems encountered during the use of TensorFlow. It is helpful to write them down for review and future learning
Problem description
When TensorFlow is installed in Anaconda, the following problem is encountered:
>>> import tensorflow
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow\python\__init__.py", line 59, in <module>
from tensorflow.core.framework.graph_pb2 import *
File "D:\Anaconda\envs\dl\lib\site-packages\tensorflow\core\framework\graph_pb2.py", line 6, in <module>
from google.protobuf import descriptor as _descriptor
File "D:\Anaconda\envs\dl\lib\site-packages\google\protobuf\descriptor.py", line 47, in <module>
ImportError: DLL load failed: The specified program could not be found.
The solution
Protobuf was upgraded yesterday when Object-Detection was installed, so rolling back the protobuf version fixes it:
pip install protobuf==3.6.0
# Servlet.service() for servlet [dispatcherServlet] in context && Whitelabel Error Page
ERROR 8040 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.NullPointerException] with root cause
After searching for a long time, the cause turned out to be a missing @Autowired annotation.
# Solution to “[dbnetlib] [connectionwrite (send()).] general network error”
Recently I needed to use Excel to generate a series of charts, with the data naturally obtained from SQL Server. The problem is that this workbook has nearly 50 charts, and each chart has to connect to the DB to get its data. When refreshing all of them, I often hit the error
"[DBNETLIB] [ConnectionWrite (send()).] General network error. Check your network documentation". My initial judgment was that it is caused by too many simultaneous data connections (as an Excel rookie I cannot solve it from the Excel side). After some googling I finally found a solution; perhaps not the best, but recorded here.
Possible causes.
This problem occurs because Windows Server 2003 and higher implements a security feature that reduces the size of the queue of concurrent TCP/IP connections to the server. This feature helps prevent denial-of-service attacks. Under high load conditions, the TCP/IP protocol may incorrectly recognize a valid TCP/IP connection as a denial-of-service attack. This behavior can lead to the problems described in the Symptoms section.
Solution.
This section, method, or task contains steps that tell you how to modify the registry. However, serious problems can occur if the registry is not modified correctly. Therefore, make sure you follow these steps carefully. For extra protection, back up the registry before modifying it. Then, if a problem occurs, you can restore the registry.
To resolve this issue, turn off this new feature by adding a SynAttackProtect entry to the following registry key on the computer that is running Microsoft SQL Server and hosts your database.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
Setting
SynAttackProtect
Value data: a DWORD of 00000000. To do this, follow these steps:
Click Start, click Run, type regedit, and then click OK. Find and click the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters
On the Edit menu, point to New, and then click DWORD Value. Type SynAttackProtect, and then press ENTER. Click Modify on the Edit menu. In the Value data box, type 00000000, click OK, and exit Registry Editor.
Note: To complete this registry change, you must restart the computer running SQL Server.
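The same registry change can be scripted with the built-in reg tool (a sketch only: run from an elevated prompt, and back up the registry first as described above):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v SynAttackProtect /t REG_DWORD /d 0 /f
```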
# MYSQL 5.7 Error Code: 1290. The MySQL server is running with the –secure-file-priv option so it..
When exporting data with MySQL 5.7, an error was reported. The error was reported as follows:
Error Code: 1290. The MySQL server is running with the --secure-file-priv option so it cannot execute this statement
According to the error message, secure-file-priv specifies the folder where exported files must be stored, so we first locate this folder.
Solution 1:
enter the following command in the MySQL command line interface:
show variables like '%secure%';
The value shown there is the permitted file path; we export files into that directory.
For SQL instructions, modify as follows:
SELECT * FROM User details WHERE gender='male'
INTO OUTFILE 'C:\\ProgramData\\MySQL\\MySQL Server 5.7\\Uploads\\man.txt'
The file can be successfully exported to this directory.
Solution 2:
Go to the installation path C:\ProgramData\MySQL\MySQL Server 5.7, find the my.ini file, and modify the default save path of secure-file-priv:
secure_file_priv=NULL — mysqld does not allow import or export
secure_file_priv=/tmp/ — mysqld may import and export only in the /tmp/ directory
secure_file_priv="" — mysqld's import and export are unrestricted
# [Android Error] java.lang.RuntimeException: An error occurred while executing doInBackground()
Recently, a bug was added to the task list to be resolved in this sprint. The stack information of the bug is as follows:
Fatal Exception: java.lang.RuntimeException: An error occurred while executing doInBackground()
at android.os.AsyncTask$3.done(AsyncTask.java:353)
at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:383)
at java.util.concurrent.FutureTask.setException(FutureTask.java:252)
at java.util.concurrent.FutureTask.run(FutureTask.java:271)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
Caused by java.lang.SecurityException: Caller no longer running, last stopped +25s437ms because: timed out while starting
at android.app.job.IJobCallback$Stub$Proxy.dequeueWork(IJobCallback.java:191)
at android.app.job.JobParameters.dequeueWork(JobParameters.java:196)
at android.support.v4.app.JobIntentService$JobServiceEngineImpl.dequeueWork(JobIntentService.java:309)
at android.support.v4.app.JobIntentService.dequeueWork(JobIntentService.java:627)
at android.support.v4.app.JobIntentService$CommandProcessor.doInBackground(JobIntentService.java:384)
at android.support.v4.app.JobIntentService$CommandProcessor.doInBackground(JobIntentService.java:377)
at android.os.AsyncTask$2.call(AsyncTask.java:333)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:636)
at java.lang.Thread.run(Thread.java:764)
From the bug information above we can see that AsyncTask.doInBackground of the system JobIntentService is called, and doInBackground calls dequeueWork. The following is the source code (androidx 1.1.0):
final class CommandProcessor extends AsyncTask<Void, Void, Void> {
    @Override
    protected Void doInBackground(Void... params) {
        GenericWorkItem work;
        if (DEBUG) Log.d(TAG, "Starting to dequeue work...");
        while ((work = dequeueWork()) != null) {
            if (DEBUG) Log.d(TAG, "Processing next work: " + work);
            onHandleWork(work.getIntent());
            if (DEBUG) Log.d(TAG, "Completing work: " + work);
            work.complete();
        }
        if (DEBUG) Log.d(TAG, "Done processing work!");
        return null;
    }
}
The source of dequeueWork() is as follows; let us focus on the mJobImpl != null branch, which enters mJobImpl.dequeueWork():
GenericWorkItem dequeueWork() {
    if (mJobImpl != null) {
        return mJobImpl.dequeueWork();
    } else {
        synchronized (mCompatQueue) {
            if (mCompatQueue.size() > 0) {
                return mCompatQueue.remove(0);
            } else {
                return null;
            }
        }
    }
}
mJobImpl is actually a CompatJobEngine, whose implementation class is JobServiceEngineImpl:
interface CompatJobEngine {
    IBinder compatGetBinder();
    GenericWorkItem dequeueWork();
}

@RequiresApi(26)
static final class JobServiceEngineImpl extends JobServiceEngine
        implements JobIntentService.CompatJobEngine {
    @Override
    public JobIntentService.GenericWorkItem dequeueWork() {
        JobWorkItem work;
        synchronized (mLock) {
            if (mParams == null) {
                return null;
            }
            work = mParams.dequeueWork();
        }
        if (work != null) {
            work.getIntent().setExtrasClassLoader(mService.getClassLoader());
            return new WrapperWorkItem(work);
        } else {
            return null;
        }
    }
}
As the bug information at the beginning of the article shows, execution reaches mParams.dequeueWork(), which goes through the Binder mechanism. Its source code is below, so we can conclude that the exception is thrown here; but since this is framework source code, it should not be our responsibility.
public @Nullable JobWorkItem dequeueWork() {
    try {
        return getCallback().dequeueWork(getJobId());
    } catch (RemoteException e) {
        throw e.rethrowFromSystemServer();
    }
}

/** @hide */
@UnsupportedAppUsage
public IJobCallback getCallback() {
    return IJobCallback.Stub.asInterface(callback);
}
After inspecting the source, the problem turns out to be in the framework layer, and issues have already been filed online:
https://github.com/evernote/android-job/issues/255
https://issuetracker.google.com/issues/63622293
Many people have run into this problem, but so far, having checked the latest Google androidx library ("androidx.core:core-ktx:1.2.0-rc01"), it is still not fixed. The workaround in our app is to insert a new class SafeJobIntentService into the same package as JobIntentService. The reason is that the dequeueWork() method is not public, so we have to write in the same package to override its methods and fix the bug.
@RestrictTo({Scope.LIBRARY})
public abstract class SafeJobIntentService extends JobIntentService {
    public SafeJobIntentService() {
    }

    GenericWorkItem dequeueWork() {
        try {
            return super.dequeueWork(); // 1. Here we wrap this method in a try/catch
        } catch (SecurityException var2) {
            var2.printStackTrace();
            return null;
        }
    }

    public void onCreate() {
        super.onCreate();
        if (VERSION.SDK_INT >= 26) {
            this.mJobImpl = new SafeJobServiceEngineImpl(this);
        } else {
            this.mJobImpl = null;
        }
    }
}

@RequiresApi(26)
public class SafeJobServiceEngineImpl extends JobServiceEngine implements CompatJobEngine {
    static final String TAG = "JobServiceEngineImpl";
    static final boolean DEBUG = false;
    final JobIntentService mService;
    final Object mLock = new Object();
    JobParameters mParams;

    SafeJobServiceEngineImpl(JobIntentService service) {
        super(service);
        this.mService = service;
    }

    public IBinder compatGetBinder() {
        return this.getBinder();
    }

    public boolean onStartJob(JobParameters params) {
        this.mParams = params;
        this.mService.ensureProcessorRunningLocked(false);
        return true;
    }

    public boolean onStopJob(JobParameters params) {
        boolean result = this.mService.doStopCurrentWork();
        synchronized (this.mLock) {
            this.mParams = null;
            return result;
        }
    }

    public GenericWorkItem dequeueWork() {
        JobWorkItem work = null;
        synchronized (this.mLock) {
            if (this.mParams == null) {
                return null;
            }
            try {
                work = this.mParams.dequeueWork();
            } catch (SecurityException var5) {
                var5.printStackTrace();
            }
        }
        if (work != null) {
            work.getIntent().setExtrasClassLoader(this.mService.getClassLoader());
            return new SafeJobServiceEngineImpl.WrapperWorkItem(work);
        } else {
            return null;
        }
    }

    final class WrapperWorkItem implements GenericWorkItem {
        final JobWorkItem mJobWork;

        WrapperWorkItem(JobWorkItem jobWork) {
            this.mJobWork = jobWork;
        }

        public Intent getIntent() {
            return this.mJobWork.getIntent();
        }

        public void complete() {
            synchronized (SafeJobServiceEngineImpl.this.mLock) {
                if (SafeJobServiceEngineImpl.this.mParams != null) {
                    try {
                        SafeJobServiceEngineImpl.this.mParams.completeWork(this.mJobWork);
                    } catch (SecurityException | IllegalArgumentException var4) {
                        // 2. Here we also wrap completeWork in a try/catch
                        var4.printStackTrace();
                    }
                }
            }
        }
    }
}
Relative to the androidx source, the code above only handles the exceptions at points 1 and 2; the rest is unchanged, so the two can be compared side by side. If a third-party library in your project already contains a SafeJobIntentService class that you cannot use directly (for example via implementation 'com.evernote:android-job:1.4.2'), a "duplicate class found in the module" conflict can occur; in that case rename the class and apply the same changes. Hopefully Google will fix this in a future release of the library.
# Solution to LaTeX "too many unprocessed floats" error
This error occurred because more than 18 figures and tables were placed in a row without any text in between. These solutions are available online:
1. Use the macro package \usepackage[section]{placeins}
2. Use \clearpage on each page
After applying the above methods, the error is gone, but the figure layout is still a bit messy. The error is in fact caused by placing too many floating graphics consecutively; according to the LaTeX documentation (http://www.ctex.org/documents/latex/graphics/node2.html), at most about 18 unprocessed floats can be held. The fix: do not float the figures, i.e. remove the figure environment and add the pictures as non-floating graphics. Code used by the author:
\centerline{\includegraphics[width=12cm]{fig1}}
\caption{fig1}\label{fig1}
%\vspace{5mm}
\centerline{\includegraphics[width=12cm]{fig2}}
\caption{fig2}\label{fig2}
%\vspace{5mm}
\centerline{\includegraphics[width=12cm]{fig3}}
\caption{fig3}\label{fig3}
...
%\vspace{5mm}
\centerline{\includegraphics[width=12cm]{fig20}}
\caption{fig20}\label{fig20}
where \vspace{5mm} can be adjusted for image spacing.
Update: If you must use the figure environment, you can combine \clearpage with the !h placement parameter, for example:
\begin{figure}[!h]
\centerline{\includegraphics[width=12cm]{fig17}}
\caption{Experimental results of the 17th image frame}\label{fig17}
\end{figure}
\clearpage
\begin{figure}[!h]
\centerline{\includegraphics[width=12cm]{fig18}}
\caption{Experimental results of the 18th image frame}\label{fig18}
\end{figure}
Place \clearpage at the end of the previous page.
# [Solved] void value not ignored as it ought to be
The GCC error "void value not ignored as it ought to be" means you are assigning the result of a function whose return type is void. Such as:
int ret;
ret = unregister_chrdev(MAJOR_NUM, "globalvar");
# Error handling response: Error: Syntax error, unrecognized expression: .c-container /deep/ .c-contai
The following error message appears on the browser console:
Error handling response: Error: Syntax error, unrecognized expression: .c-container /deep/ .c-container
at Function.se.error ()
at se.tokenize ()
at se.select ()
at Function.se [as find] ()
at S.fn.init.find ()
at new S.fn.init ()
...
After checking, it was found that a browser plug-in affects the global $. After deleting the plug-in in the browser (you need to confirm which plug-in it is), the error is resolved.
# Maven plugin Error: Execution default-descriptor of goal org.apache.maven.plugins:maven-plugin-plugin:3.2:descriptor failed
The above error occurred when writing Maven plug-in.
Solution
Explicitly specify the version of maven-plugin-plugin in pom.xml:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-plugin-plugin</artifactId>
<version>3.5.2</version>
</plugin>
</plugins>
</build>
Another error
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-plugin-plugin:3.2:descriptor (default-descriptor) on project maven-project: Error extracting plugin descriptor: ‘No mojo definitions were found for plugin
How to Solve
Explicitly specify the version of maven-plugin-plugin in pom.xml:
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-plugin-plugin</artifactId>
<version>3.5.2</version>
<configuration>
<!-- Or add a descriptor to the mojo class comment -->
<skipErrorNoDescriptorsFound>true</skipErrorNoDescriptorsFound>
</configuration>
</plugin>
</plugins>
</build>
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.1:testCompile (default-testCompile) on project xxx: Fatal error compiling: basedir D:\xxx\target\generated-test-sources\test-annotations does not exist -> [Help 1]
Solution
Skip the tests during installation:
mvn install -DskipTests=true
# Error:Cannot build artifact xxx:war exploded’ because it is included into a circular dependency
IDEA reports the error: Error: Cannot build artifact 'xxx:war exploded' because it is included into a circular dependency
How to Solve:
Press Ctrl+Alt+Shift+S to open Project Structure (or Ctrl+Alt+A and search for "Project Structure").
Click Artifacts on the left and delete the two extra entries, namely
xxx:war and xxx:war exploded
Deleting them resolves the error.
|
{}
|
# AP® Statistics
Free Version
Easy
APSTAT-FZKLLN
A random variable is normally distributed with $\mu = 25$ and $\sigma = 5$.
What are the new mean and standard deviation if you multiply each value in the underlying population by $2$ and then add $10$?
A
$\mu = 50; \sigma = 5$
B
$\mu = 50; \sigma= 20$
C
$\mu = 60; \sigma = \sqrt{60}$
D
$\mu = 60; \sigma = 10$
E
$\mu = 60; \sigma = 20$
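For a linear transformation $Y = aX + b$, the rules are $\mu_Y = a\mu_X + b$ and $\sigma_Y = |a|\sigma_X$ (adding a constant shifts the mean but not the spread). A quick simulation sketch checking them for $a = 2$, $b = 10$:

```python
import random
import statistics

random.seed(0)
xs = [random.gauss(25, 5) for _ in range(200_000)]
ys = [2 * x + 10 for x in xs]

print(round(statistics.mean(ys), 1))   # close to 2*25 + 10 = 60
print(round(statistics.stdev(ys), 1))  # close to 2*5 = 10
```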
|
{}
|
• ### Description of longitudinal profiles of showers dominated by Cherenkov light(1709.01458)
Sept. 5, 2017 astro-ph.HE
With the aim to describe the longitudinal development of Cherenkov dominated showers we investigate the energy deposit and the number of charged particles in air showers induced by energetic cosmic rays. Based on the Monte Carlo simulations, discrepancies between different estimates of calorimetric energies are documented. We focus on the energy deposit profiles of air showers deducible from the fluorescence and Cherenkov light generated along CONEX and CORSIKA cascades.
• ### Mass Composition of Cosmic Rays with Combined Surface Detector Arrays(1708.06164)
Aug. 21, 2017 astro-ph.HE
Our study exploits the Constant Intensity Cut principles applied simultaneously to muonic and electromagnetic detectors of cosmic rays. We use the fact that the ordering of events according to their signal sizes induced in different types of surface detectors provides information about the mass composition of primary cosmic-ray beam, with low sensitivity to details of hadronic interactions. Composition analysis at knee energies is performed using Monte Carlo simulations for extensive air showers having maxima located far away from a hypothetical observatory. Another type of a hypothetical observatory is adopted to examine composition of ultra-high energy primaries which initiate vertical air showers with maxima observed near surface detectors.
• ### A Bayesian on-off analysis of cosmic ray data(1707.03155)
July 11, 2017 astro-ph.IM
We deal with the analysis of on-off measurements designed for the confirmation of a weak source of events whose presence is hypothesized, based on former observations. The problem of a small number of source events that are masked by an imprecisely known background is addressed from a Bayesian point of view. We examine three closely related variables, the posterior distributions of which carry relevant information about various aspects of the investigated phenomena. This information is utilized for predictions of further observations, given actual data. Backed by details of detection, we propose how to quantify disparities between different measurements. The usefulness of the Bayesian inference is demonstrated on examples taken from cosmic ray physics.
• We measure the energy emitted by extensive air showers in the form of radio emission in the frequency range from 30 to 80 MHz. Exploiting the accurate energy scale of the Pierre Auger Observatory, we obtain a radiation energy of 15.8 \pm 0.7 (stat) \pm 6.7 (sys) MeV for cosmic rays with an energy of 1 EeV arriving perpendicularly to a geomagnetic field of 0.24 G, scaling quadratically with the cosmic-ray energy. A comparison with predictions from state-of-the-art first-principle calculations shows agreement with our measurement. The radiation energy provides direct access to the calorimetric energy in the electromagnetic cascade of extensive air showers. Comparison with our result thus allows the direct calibration of any cosmic-ray radio detector against the well-established energy scale of the Pierre Auger Observatory.
• ### On Bayesian analysis of on-off measurements(1603.03386)
March 10, 2016 astro-ph.IM
We propose an analytical solution to the on-off problem within the framework of Bayesian statistics. Both the statistical significance for the discovery of new phenomena and credible intervals on model parameters are presented in a consistent way. We use a large enough family of prior distributions of relevant parameters. The proposed analysis is designed to provide Bayesian solutions that can be used for any number of observed on-off events, including zero. The procedure is checked using Monte Carlo simulations. The usefulness of the method is demonstrated on examples from gamma-ray astronomy.
• ### Maximum entropy analysis of cosmic ray composition(1512.09248)
Jan. 1, 2016 astro-ph.HE
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the superposition model. We present two examples that demonstrate what consequences can be drawn for energy dependent changes in the primary composition.
• ### Variability of VHE $\gamma$-ray emission from the binary PSR B1259-63/LS 2883(1512.04849)
Dec. 15, 2015 astro-ph.HE
We examine changes of the $\gamma$-ray intensity observed from the direction of the binary system PSR B1259-63/LS 2883 during campaigns around its three periastron passages. A simple and straightforward method is applied to the published data obtained with the Imaging Atmospheric Cherenkov Technique. Regardless of many issues of the detection process, the method works only with numbers of very high energetic photons registered in the specified regions. Within the realm of this scheme, we recognized changes attributable to the variations of the intrinsic source activity at high levels of significance.
• Neutrinos in the cosmic ray flux with energies near 1 EeV and above are detectable with the Surface Detector array of the Pierre Auger Observatory. We report here on searches through Auger data from 1 January 2004 until 20 June 2013. No neutrino candidates were found, yielding a limit to the diffuse flux of ultra-high energy neutrinos that challenges the Waxman-Bahcall bound predictions. Neutrino identification is attempted using the broad time-structure of the signals expected in the SD stations, and is efficiently done for neutrinos of all flavors interacting in the atmosphere at large zenith angles, as well as for "Earth-skimming" neutrino interactions in the case of tau neutrinos. In this paper the searches for downward-going neutrinos in the zenith angle bins $60^\circ-75^\circ$ and $75^\circ-90^\circ$ as well as for upward-going neutrinos, are combined to give a single limit. The $90\%$ C.L. single-flavor limit to the diffuse flux of ultra-high energy neutrinos with an $E^{-2}$ spectrum in the energy range $1.0 \times 10^{17}$ eV - $2.5 \times 10^{19}$ eV is $E_\nu^2 dN_\nu/dE_\nu < 6.4 \times 10^{-9}~ {\rm GeV~ cm^{-2}~ s^{-1}~ sr^{-1}}$.
• A measurement of the cosmic-ray spectrum for energies exceeding $4{\times}10^{18}$ eV is presented, which is based on the analysis of showers with zenith angles greater than $60^{\circ}$ detected with the Pierre Auger Observatory between 1 January 2004 and 31 December 2013. The measured spectrum confirms a flux suppression at the highest energies. Above $5.3{\times}10^{18}$ eV, the "ankle", the flux can be described by a power law $E^{-\gamma}$ with index $\gamma=2.70 \pm 0.02 \,\text{(stat)} \pm 0.1\,\text{(sys)}$ followed by a smooth suppression region. For the energy ($E_\text{s}$) at which the spectral flux has fallen to one-half of its extrapolated value in the absence of suppression, we find $E_\text{s}=(5.12\pm0.25\,\text{(stat)}^{+1.0}_{-1.2}\,\text{(sys)}){\times}10^{19}$ eV.
• ### Significance for signal changes in gamma-ray astronomy(1509.00353)
Sept. 1, 2015 astro-ph.IM, astro-ph.HE
We describe a straightforward modification of frequently invoked methods for the determination of the statistical significance of a gamma-ray signal observed in a counting process. A simple criterion is proposed to decide whether a set of measurements of the numbers of photons registered in the source and background regions is consistent with the assumption of a constant source activity. This method is particularly suitable for immediate evaluation of the stability of the observed gamma-ray signal. It is independent of the exposure estimates, reducing thus the impact of systematic inaccuracies, and properly accounts for the fluctuations in the number of detected photons. The usefulness of the method is demonstrated on several examples. We discuss intensity changes for gamma-ray emitters detected at very high energies by the current gamma-ray telescopes (e.g. 1ES 0229+200, 1ES 1959+650 and PG 1553+113). Some of the measurements are quantified to be exceptional with large statistical significances.
• ### A branching model for hadronic air showers(1509.00364)
Sept. 1, 2015 astro-ph.HE
We introduce a simple branching model for the development of hadronic showers in the Earth's atmosphere. Based on this model, we show how the size of the pionic component followed by muons can be estimated. Several aspects of the subsequent muonic component are also discussed. We focus on the energy evolution of the muon production depth. We also estimate the impact of the primary particle mass on the size of the hadronic component. Even though a precise calculation of the development of air showers must be left to complex Monte Carlo simulations, the proposed model can reveal qualitative insight into the air shower physics.
• ### Unexpected gamma-ray signal in the vicinity of 1ES 0229+200(1509.00333)
Sept. 1, 2015 astro-ph.HE
We report on an unidentified gamma-ray signal found in the region around the BL Lac object 1ES 0229+200. It was recognized serendipitously in our analysis of 6.2 years of Fermi-LAT data at a distance less than 3° away from the blazar. The observed excess of counts manifests itself as an unexpected local maximum in the test statistic map. Although several Fermi-LAT sources have been identified in this area, we were not able to link them to the position of this residual signal. A clear association with sources visible in other wavebands was not successful either. We briefly discuss characteristics of this unresolved phenomenon. Our results suggest a steep energy spectrum and a point-like nature of this candidate gamma-ray emitter.
• ### Study of Dispersion of Mass Distribution of Ultra-High Energy Cosmic Rays using a Surface Array of Muon and Electromagnetic Detectors(1503.07734)
March 26, 2015 astro-ph.IM, astro-ph.HE
We consider a hypothetical observatory of ultra-high energy cosmic rays consisting of two surface detector arrays that measure independently electromagnetic and muon signals induced by air showers. Using the constant intensity cut method, sets of events ordered according to each of both signal sizes are compared giving the number of matched events. Based on its dependence on the zenith angle, a parameter sensitive to the dispersion of the distribution of the logarithmic mass of cosmic rays is introduced. The results obtained using two post-LHC models of hadronic interactions are very similar and indicate a weak dependence on details of these interactions.
• ### Variability of VHE $\gamma$-ray sources(1412.2050)
Dec. 5, 2014 astro-ph.HE
We study changes in the $\gamma$-ray intensity at very high energies observed from selected active galactic nuclei. Publicly available data collected by Cherenkov telescopes were examined by means of a simple method utilizing solely the number of source and background events. Our results point to some degree of time variability in signal observed from the investigated sources. Several measurements were found to be excessive or deficient in the number of source events when compared to the source intensity deduced from other observations.
• ### Time variability of the $\gamma$-ray binary HESS J0632+057(1409.5395)
Sept. 18, 2014 astro-ph.HE
We study changes in the $\gamma$-ray intensity at very high energies observed from the $\gamma$-ray binary HESS J0632+057. Publicly available data collected by Cherenkov telescopes were examined by means of a simple method utilizing solely the number of source and background events. Our results point to time variability in the signal from the selected object, consistent with periodic modulation of the source intensity.
• ### Testing time variability of gamma-ray flux(1309.6476)
Sept. 25, 2013 astro-ph.IM, astro-ph.HE
A way of examining a hypothetical non-zero $\gamma$-ray signal for time changes is presented. The time variability of the recently observed $\gamma$-ray source PKS 2155-304 is discussed. Several measurements were found to be excessive or deficient with large significances on time scales of months and days.
• Contributions of the Pierre Auger Collaboration to the 33rd International Cosmic Ray Conference, Rio de Janeiro, Brazil, July 2013
• ### New method for atmospheric calibration at the Pierre Auger Observatory using FRAM, a robotic astronomical telescope(0706.1710)
June 12, 2007 astro-ph
FRAM - F/(Ph)otometric Robotic Atmospheric Monitor is the latest addition to the atmospheric monitoring instruments of the Pierre Auger Observatory. An optical telescope equipped with CCD camera and photometer, it automatically observes a set of selected standard stars and a calibrated terrestrial source. Primarily, the wavelength dependence of the attenuation is derived and the comparison between its vertical values (for stars) and horizontal values (for the terrestrial source) is made. Further, the integral vertical aerosol optical depth can be obtained. A secondary program of the instrument, the detection of optical counterparts of gamma-ray bursts, has already proven successful. The hardware setup, software system, data taking procedures, and first analysis results are described in this paper.
• ### The bright optical flash from GRB 060117(astro-ph/0606004)
June 2, 2006 astro-ph
We present a discovery and observation of an extraordinarily bright prompt optical emission of the GRB 060117 obtained by a wide-field camera atop the robotic telescope FRAM of the Pierre Auger Observatory from 2 to 10 minutes after the GRB. We found rapid average temporal flux decay of alpha = -1.7 +- 0.1 and a peak brightness R = 10.1 mag. Later observations by other instruments set a strong limit on the optical and radio transient fluxes, unveiling an unexpectedly rapid further decay. We present an interpretation featuring a relatively steep electron-distribution parameter p ~ 3.0 and providing a straightforward solution for the overall fast decay of this optical transient as a transition between reverse and forward shock.
|
{}
|
Select Page
The following data set represents the time (in minutes) for a random sample of phone calls made by employees at a company:
(b) Find the sample standard deviation.
(c) Use the t-distribution to construct a 90% confidence interval for the population mean and interpret the results. Assume the population of the data set is normally distributed.
(d) Repeat part (c) assuming that the population standard deviation is known. Compare results.
Solution:
(b) The sample standard deviation is computed as:
Also, using Excel we find that
(c) The 90% confidence interval for the population mean is given by:
where the critical value corresponds to the two-tailed cutoff point of the t-distribution, for the given confidence level and 9 degrees of freedom, which means that
This means that
This means we can be 90% confident that the interval (4.463051, 8.603616) contains the actual population mean.
(d) Now we assume that the population standard deviation is known, and is equal to. The 90% confidence interval is this case is equal to:
This means we can be 90% confident that the interval (4.712649, 8.354011) contains the actual population mean. The interval is narrower, which makes sense, given that we have more information (the population variance is known).
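As a sanity check, the part (c) computation can be reproduced with a short Python sketch. The sample below is a hypothetical placeholder (the original data set is not reproduced here); the only looked-up constant is the two-tailed critical value t(0.95, 9) = 1.833.

```python
import math
import statistics

# Hypothetical phone-call durations (minutes); n = 10 gives 9 degrees of freedom
times = [6.2, 4.9, 10.1, 3.7, 7.8, 5.5, 12.0, 6.8, 4.1, 8.3]

n = len(times)
xbar = statistics.mean(times)
s = statistics.stdev(times)      # sample standard deviation (divides by n - 1)

t_crit = 1.833                   # t_{0.95, 9} from a t-table
half_width = t_crit * s / math.sqrt(n)
print((xbar - half_width, xbar + half_width))
```

With the actual data from the exercise, the same three lines reproduce the interval (4.463051, 8.603616) quoted above.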
GO TO NEXT PROBLEM >>
|
{}
|
## COCI '21 Contest 1 #2 Kamenčići
View as PDF
Points: 10 (partial)
Time limit: 1.0s
Memory limit: 512M
Problem type
This summer, Antun and Branka stumbled upon a very interesting beach, which was completely covered with plastic 'pebbles' brought by the sea from the containers that fell from the cargo ships. They decided to take back with them some of these pebbles, some red and some blue. Now that autumn has arrived, they are playing with the pebbles and reminiscing about the warm summer days.
Their game proceeds as follows: in the beginning, they place the pebbles in a row. Then, Antun and Branka make moves in turn, each time removing one of the pebbles from one of the ends of the row, until someone obtains red pebbles, losing the game. Antun moves first and is wondering whether he could win regardless of the moves Branka makes. Please help him and write a program which will answer his question.
Subtask 1: 10 points
Subtask 2: 20 points
Subtask 3: 40 points
#### Input Specification
The first line contains two integers, and .
The second line contains a sequence of characters C or P, where C denotes a red pebble, and P denotes a blue pebble. The character C appears at least times.
#### Output Specification
If Antun can win regardless of Branka's moves, you should print DA; otherwise, print NE.
#### Sample Input 1
4 1
CCCP
#### Sample Output 1
DA
#### Sample Input 2
8 2
PCPPCCCC
#### Sample Output 2
DA
#### Explanation for Sample Output 2
Antun can take a blue pebble from the left (CPPCCCC). Then, Branka has to take a red pebble.
If she takes a pebble from the left (PPCCCC), Antun will take the first, and Branka the second blue pebble on the left, after which only red pebbles remain and Branka will lose.
If she takes a pebble from the right (CPPCCC), Antun can take another pebble from the right and then Branka will again have to take another red pebble and lose.
#### Sample Input 3
9 1
PPCPPCPPC
#### Sample Output 3
NE
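A memoized interval-DP sketch of one possible solution (not an official one; it assumes the state space $O(n^2 k)$ is small enough, and it treats an emptied row as a win for the player to move, a case that cannot occur when there are enough red pebbles):

```python
from functools import lru_cache

def antun_wins(n: int, k: int, s: str) -> bool:
    # prefix[i] = number of red pebbles ('C') among the first i pebbles
    prefix = [0] * (n + 1)
    for i, c in enumerate(s):
        prefix[i + 1] = prefix[i] + (c == 'C')
    total_red = prefix[n]

    @lru_cache(maxsize=None)
    def wins(l: int, r: int, a: int) -> bool:
        # Pebbles s[l..r] remain; `a` = reds taken so far by the player to move.
        removed_red = total_red - (prefix[r + 1] - prefix[l])
        b = removed_red - a              # reds taken so far by the opponent
        for nl, nr, c in ((l + 1, r, s[l]), (l, r - 1, s[r])):
            na = a + (c == 'C')
            if na >= k:
                continue                 # taking this pebble loses immediately
            if nl > nr or not wins(nl, nr, b):
                return True
        return False

    return wins(0, n - 1, 0)

for n, k, s in ((4, 1, "CCCP"), (8, 2, "PCPPCCCC"), (9, 1, "PPCPPCPPC")):
    print("DA" if antun_wins(n, k, s) else "NE")
```

The recursion alternates players implicitly: the opponent's red count is recoverable from how many reds have left the row, so only the mover's count needs to be stored.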
|
{}
|
# Independence
Author: mathforces
Problem has been solved: 10 times
Русский язык | English Language
21 buildings of different heights stand in a row along Independence Avenue. Jessy looked out the window of the last building towards the other buildings and counted $N$ buildings that she could see. Given that the taller buildings obstruct the lower ones, what is $[10X]$, where $X$ is the expected value of $N$?
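A sketch of one way to attack this, assuming the heights are in uniformly random order and a building is visible iff it is taller than everything between it and the viewer: the $i$-th building counting outward is visible exactly when it is the tallest of the first $i$, which happens with probability $1/i$, so $X = H_{20}$.

```python
import random
from fractions import Fraction

m = 20  # the 21st building is the viewpoint; 20 candidates remain

# Exact expectation: the i-th building (counting outward from the viewer)
# is visible iff it is the tallest of the first i, probability 1/i.
exact = sum(Fraction(1, i) for i in range(1, m + 1))
print(float(exact), int(10 * float(exact)))   # H_20, and the answer [10X]

# Monte Carlo check of the same quantity
random.seed(1)
trials = 100_000
seen = 0
for _ in range(trials):
    heights = random.sample(range(1_000_000), m)
    tallest = -1
    for h in heights:
        if h > tallest:
            seen += 1
            tallest = h
print(seen / trials)
```

Since $H_{20} \approx 3.5977$, the requested value $[10X]$ is 35.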
|
{}
|
# MATLAB: Matlab how can I make this inequality
I have an exercise that requires me to plot the function y=x^2 for 0<=x<pi/2. What would be the right approach for this?
• There are different ways to display a graph as you are asking, one of the most basic ones being
1. define the base vector
step = 0.01;
x = 0:step:pi/2;
2. define the function
y=x.^2;
3. display
plot(x, y)
grid on
|
{}
|
1. ## l'Hopital's Rule
Evaluate lim[x→0] (cosh x - 1)/x^2.
On this problem I applied l'Hopital twice and got -inf, but that answer is wrong. Can someone explain?
2. Hello, viet!
lim[x→0] (cosh x - 1)/x^2 . . . This goes to 0/0

Apply L'Hopital:

lim[x→0] (sinh x)/(2x) . . . This goes to 0/0

Apply L'Hopital again:

lim[x→0] (cosh x)/2 . . . This goes to 1/2
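A quick numerical check confirms the limit is 1/2, not -inf:

```python
import math

# (cosh x - 1)/x^2 should approach 1/2 as x -> 0
for x in (0.5, 0.1, 0.01, 0.001):
    print(x, (math.cosh(x) - 1) / x**2)
```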
|
{}
|
# Order of magnitude of the hitting time of a random walk
Consider the random walk on $\mathbb R$ with $X_0 = a >0$ and $$X_{n+1} = X_n + U_n,$$ where $U_0, U_1, U_2,\ldots$ is an i.i.d. sequence of uniform random numbers in $[-1,1]$.
How does the hitting time to $(-\infty,0]$; i.e. $$\tau_a:=\min\{n : X_n\leq 0\}$$ behave for large $a$?
I need to prove $\tau_a$ is of order $a^2$; i.e. for any $\epsilon>0$ there are constants $C$ and $C'$ such that for large enough $a$ we have
$$\mathbb P[Ca^2<\tau_a<C'a^2]>1-\epsilon.$$
• The hitting time has a heavy tail. If it goes the wrong way in the first $a^2$ time steps, then you can expect to wait $a^2$ steps more. So your $r_a$ should be the same size as your $m_a$. That is to say, it only really makes sense to look for bounds of the form $\mathbb P(\tau_a > m_a) < \epsilon$. – Anthony Quas Dec 18 '15 at 0:59
• @AnthonyQuas Shouldn't the bound be of the form $\mathbb{P}(\tau_a \le m_a)<\epsilon$? The series bounding the other tail is divergent. – S.B. Dec 18 '15 at 4:17
• @AnthonyQuas Can't understand your point. Since $\tau_a<\infty$ a.s., $\tau_a$ has some unknown distribution on the natural numbers. The numbers $m_a$ and $r_a$ exist anyway. – Ali Khezeli Dec 18 '15 at 5:09
We assume that $U$ is centered, square integrable and we denote by $\sigma^2>0$ its variance. Given $a\geq 0$, I denote by $\tau_a$ the hitting time of $[a,+\infty)$ by $X$ starting from $X_0=0$ (this formulation is of course equivalent but more natural when using the following method). Fix $\varepsilon>0$ and let us prove that there exists $C,C'$ such that, for all $a$ large enough, \begin{align*} \mathbb{P}(C a^2< \tau_a \leq C' a^2+1)\geq 1-\varepsilon. \end{align*} We will use Donsker's invariance principle, although there is no need to carefully check the law of hitting times for the Brownian motion, nor to quantify the speed of convergence to the Brownian motion.
Let $B$ be a standard one dimensional Brownian motion and choose $C>0$ and $C'>C$ such that \begin{align*} \mathbb{P}(C < T_{1/\sigma}\leq C')\geq 1-\varepsilon/3, \end{align*} where $T_{1/\sigma}$ is the hitting time of $1/\sigma$ by the Brownian motion $B$.
Let $(\varphi_k)_{k\in\mathbb{N}}$ (resp. $(\psi_k)_{k\in\mathbb{N}}$) be an increasing (resp. bounded decreasing) sequence of continuous functions converging pointwise to $\mathbf{1}_{\cdot < 1/\sigma}$. We define the continuous functions $f_k$ and $g_k$ on $C([0,\infty))$ (with the topology defined p. 60 of Karatzas-Shreve) by \begin{align*} f_k(\omega)=\varphi_k(\max_{t\in[0,C]} \omega_t) \text{ and }g_k(\omega)=\psi_k(\max_{t\in[0,C']} \omega_t). \end{align*} We thus have, almost surely, \begin{align*} \mathbf{1}_{C < T_{1/\sigma}\leq C'}=\lim_{k\rightarrow\infty} f_k(B)-g_k(B). \end{align*} Hence, by the dominated convergence theorem, we can choose $k_0$ such that \begin{align*} \mathbb{E}(f_{k_0}(B)-g_{k_0}(B))\geq 1-2\varepsilon/3. \end{align*}
For any $n\in\mathbb{N}$, let us define the affine process starting from $0$ and such that \begin{align*} X_t^{(n)}=\frac{1}{\sigma \sqrt{n}}Y_{nt},\text{ with } Y_t=\sum_{n=1}^{\lfloor t\rfloor}U_n+(t-\lfloor t\rfloor)U_{\lfloor t\rfloor+1}. \end{align*} Denoting by $T^{(n)}_{1/\sigma}$ the first hitting time of $1/\sigma$ by $X^{(n)}$, it is clear that $a^2 T^{(a^2)}_{1/\sigma}\leq \tau_a < a^2 T^{(a^2)}_{1/\sigma}+1$. Hence \begin{align*} \mathbb{P}(C a^2< \tau_a \leq C' a^2+1)&\geq \mathbb{P}(C a^2< a^2 T^{(a^2)}_{1/\sigma}\leq C' a^2)\\ &= \mathbb{P}(C < T^{(a^2)}_{1/\sigma}\leq C')\\ &\geq \mathbb{E}(f_{k_0}(X^{a^2})-g_{k_0}(X^{a^2})) \end{align*} We know that the law of $(X_t^{(n)})_{t\geq 0}$ converges weakly to the Brownian motion on $C([0,\infty))$ when $n\rightarrow\infty$ (see for instance Theorem~4.20 p.71 in Karatzas-Shreve), hence \begin{align*} \mathbb{E}(f_{k_0}(X^{a^2})-g_{k_0}(X^{a^2}))\xrightarrow[a\rightarrow\infty]{} \mathbb{E}(f_{k_0}(B)-g_{k_0}(B))\geq 1-2\varepsilon/3. \end{align*} As a consequence, there exists $a_0$ such that, for all $a\geq a_0$, \begin{align*} \mathbb{P}(C a^2< \tau_a \leq C' a^2+1)\geq 1-\varepsilon. \end{align*}
After rescaling (the variance is $1/3$ instead of $1$), the random walk approaches Brownian motion. The first hitting time of $c$ for Brownian motion follows a Lévy distribution ($\textrm{Levy}(0,c^2)$). The cumulative distribution function is known explicitly as are the asymptotics. Note that $c = \sqrt{3} a$.
The probability that the hitting time is greater than $t$ is the same as the probability that a Brownian motion has not reached $0$ from $c/\sqrt{t}$ at time $1$, which is $2\Phi(c/\sqrt{t})-1$ by reflection. When $c/\sqrt{t}$ is small, this is approximately $2\phi(0)(c/\sqrt{t}) = \sqrt{\frac{2 c^2}{\pi t}}$. So, to reduce the chance of not hitting $0$ by time $t$ to some small $\varepsilon$, you need $t$ to be greater than about $2 c^2/(\pi \varepsilon^2)$ or $6a^2/(\pi \varepsilon^2)$.
• Same works for a lower bound on the hitting time. But we need an approximation of the error between the random walk and a Brownian motion. Correct? – Ali Khezeli Dec 18 '15 at 13:43
• Yes, this heuristic alone isn't rigorous. There are other arguments that it suggests, though. I'll try to post some of those later. – Douglas Zare Dec 22 '15 at 19:57
• The approximation you are looking for can be done using a version of an "almost sure invariance principle". ASIP basically says that you can enlarge your probability space and define there the random walk and a Brownian motion in a way that $S_{[nt]}-W_t$ is controlled almost surely. For the example you are mentioning, you can apply the Komlos-Major-Tsunady approximation. I think though that a direct proof without Brownian motion is easier. – user78465 Feb 22 '16 at 20:39
Here is another solution, hopefully easier to follow than the one I provided earlier (but unfortunately not shorter). It is less general, since it uses the fact that the random variables $U_i$ are symmetric and uniformly bounded by $1$. I hope it helps!
We denote by $\tau_a$ the hitting time of $a$ by $X$ starting from $0$. This formulation is of course equivalent to yours but is more natural with the following method, which mimics the proof of the reflection principle for Brownian motions.
First step (proof of a kind of reflection principle for random walks): we prove that, for all $n\geq 0$, $a\geq 0$, \begin{align*} \mathbb{P}(\tau_a\leq n)=\mathbb{P}(X_n > a)+\mathbb{P}(X_n\geq 2X_{\tau_a}-a). \end{align*} Indeed, for all $b\leq a$, \begin{align*} \mathbb{P}(\tau_a\leq n,\ X_n\leq b)&=\mathbb{P}(\tau_a\leq n,\ X_n-X_{\tau_a}\leq b-X_{\tau_a}). \end{align*} But, conditionally on $\tau_a\leq n$ and by the strong Markov property, $X_n-X_{\tau_a}$ is independent of both $\tau_a$ and $X_{\tau_a}$ and it has the same law as $X_{\tau_a}-X_n$ by symmetry of the $U_i$, hence \begin{align*} \mathbb{P}(\tau_a\leq n,\ X_n\leq b)&=\mathbb{P}(\tau_a\leq n,\ X_{\tau_a}-X_n\leq b-X_{\tau_a})\\ &=\mathbb{P}(\tau_a\leq n,\ X_n\geq 2X_{\tau_a}- b)=\mathbb{P}(X_n\geq 2X_{\tau_a}- b), \end{align*} since $2X_{\tau_a}-b\geq 2 a-b\geq a$. Now \begin{align*} \mathbb{P}(\tau_a\leq n)&=\mathbb{P}(\tau_a\leq n,\ X_n>a)+\mathbb{P}(\tau_a\leq n,\ X_n\leq a)\\ &=\mathbb{P}(X_n>a)+\mathbb{P}(X_n\geq 2X_{\tau_a}-a). \end{align*}
Second step (conclusion using the CLT): Let $Y$ be a centred normalized Gaussian variable. We deduce from the first step and from the fact that $X_{\tau_a}\in[a,a+1]$ almost surely that \begin{align*} 2\mathbb{P}(X_n\geq a+2)\leq \mathbb{P}(\tau_a\leq n)\leq 2\mathbb{P}(X_n\geq a) \end{align*} and hence that \begin{align*} \mathbb{P}(|X_n|\geq a+2)\leq \mathbb{P}(\tau_a\leq n)\leq \mathbb{P}(|X_n|\geq a). \end{align*} Hence, for all $C'>0$, \begin{align*} \mathbb{P}(\tau_a\leq C' a^2)&\geq \mathbb{P}\left(\frac{|X_{C'a^2}|}{a\sqrt{C'}}\geq \frac{a+2}{a\sqrt{C'}}\right)\\ &\xrightarrow[a\rightarrow\infty]{} \mathbb{P}\left(|Y|\geq \frac{1}{\sqrt{C'}}\right) \end{align*} and \begin{align*} \mathbb{P}(\tau_a> C a^2)&\geq 1-\mathbb{P}\left(\frac{|X_{Ca^2}|}{a\sqrt{C}}\geq \frac{1}{\sqrt{C}}\right)\\ &\xrightarrow[a\rightarrow\infty]{} 1-\mathbb{P}\left(|Y|\geq \frac{1}{\sqrt{C}}\right). \end{align*} Choosing $C>0$ small enough and $C'>0$ big enough, we conclude that, for all $a$ large enough, \begin{align*} \mathbb{P}(C a^2<\tau_a\leq C' a^2)&\geq 1-\varepsilon. \end{align*}
Lemma 4.18 of the book Brownian Motion and Stochastic Calculus implies a lower bound on $\tau_a$ for arbitrary random walks with mean zero and a given variance. It follows that there is a $\delta>0$ such that $\mathbb P[\tau_a <\delta a^2]<\epsilon$.
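For intuition, a quick Monte Carlo sketch (illustrative only, not a proof) showing that the median of $\tau_a$ grows like $a^2$; the value of the ratio is consistent with the Lévy-distribution heuristic above once the step variance $1/3$ is taken into account:

```python
import random

def hitting_time(a: float, cap: int) -> int:
    # Walk starts at a; steps are i.i.d. Uniform[-1, 1]; stop once X_n <= 0.
    x = a
    for n in range(1, cap + 1):
        x += random.uniform(-1.0, 1.0)
        if x <= 0.0:
            return n
    return cap  # truncated; the tail is heavy, so a cap is needed

random.seed(0)
ratios = {}
for a in (5, 10, 20):
    times = sorted(hitting_time(a, 200 * a * a) for _ in range(200))
    median = times[len(times) // 2]
    ratios[a] = median / a**2
    print(a, median, ratios[a])   # the last column is roughly constant
```

The median is used rather than the mean because, as noted in the comments, the hitting time has a heavy tail and its mean diverges.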
|
{}
|
# Help getting Ant installed and to run
Davy Kelly
Ranch Hand
Posts: 384
Hi everyone,
I am going through the Agile Java book by Jeff Langr, and the book says to install Ant.
I have downloaded a zip file for my windows machine, and extracted the zip file to a directory called ant in C:
C:\ant
C:\ant\apache-ant-1.6.5
I have appended my classpath to include C:\ant\apache-ant-1.6.5\lib\ant.jar;
then I appended my path with C:\ant\apache-ant-1.6.5\bin;
I then made a new variable and called it ANT_HOME and set the environment as C:\ant
But when I try to use ant, by typing at the command line:
C:\ant I get a:
how can I make Ant see the tools.jar in C:\Program Files\Java\jdk1.5.0_03\lib and not the C:\Program Files\Java\jre1.5.0_04\lib\tools.jar
Davy
[ August 12, 2005: Message edited by: Davy Kelly ]
Tim Holloway
Bartender
Posts: 18408
58
Actually, ANT_HOME should be defined as the root directory of the unzipped Ant. That is, "C:\ant\apache-ant-1.6.5".
I don't know offhand if Ant uses the JAVA_HOME environment variable or not. A lot of Java apps do, but not all of them.
Davy Kelly
Ranch Hand
Posts: 384
sorry, I forgot to add in the apache-ant-1.6.5 as the ANT_HOME, in my above post.
I double checked and I do have this variable set to what you suggested, but it's still not working.
davy
[ August 12, 2005: Message edited by: Davy Kelly ]
Davy Kelly
Ranch Hand
Posts: 384
I eventually got it working. My JAVA_HOME was not pointing to anything.
Davy
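For reference, the working configuration from this thread amounts to something like the following at a Windows command prompt (paths are the examples from the posts above; adjust to your install):

```shell
set ANT_HOME=C:\ant\apache-ant-1.6.5
set JAVA_HOME=C:\Program Files\Java\jdk1.5.0_03
set PATH=%ANT_HOME%\bin;%PATH%
ant -version
```

With JAVA_HOME pointing at the JDK (not the JRE), Ant picks up tools.jar from %JAVA_HOME%\lib.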
|
{}
|
## Contents
### Overview
One important functionality of Radx2Grid is the ability to distinguish between convective and stratiform precipitation in gridded radar data. The algorithm is a modified version of the process described by Steiner et al. (1995), which analyzes the radar reflectivity field and flags grid points as convective or stratiform precipitation. Distinguishing between the two precipitation types is important due to the distinct profiles of vertical velocity, microphysical processes, and diabatic heating in convective and stratiform precipitation. This page will describe the basic methodology and point the user to key parameters.
### Separation Process
The default setting of Radx2Grid is to not perform the convective stratiform separation; performing the separation and saving the results to the gridded file must be enabled in the Radx2Grid parameter file (identify_convective_stratiform_split; line 2428).
The convective stratiform algorithm categorizes radar echoes in two ways:
1. Intensity: high-intensity reflectivity values are almost certainly found only in convective precipitation
2. Texture: convective precipitation exhibits greater horizontal variability than stratiform precipitation
As in Steiner et al. (1995), the first step identifies definite convection. This process is done by flagging all points that exceed a user-defined reflectivity threshold as convective (conv_strat_dbz_threshold_for_definite_convection; line 2491). Note that this threshold will vary in continental and tropical convection (e.g., 53 vs 40/45 dBZ). For each point flagged as definite convection, all points within the radius of convective influence are also flagged as convection (conv_strat_convective_radius_km; line 2504).
While Steiner et al. (1995) next identifies any remaining convection by calculating the reflectivity difference between a point and its neighbors, Radx2Grid instead analyzes the "texture" of the reflectivity field, which is defined as $\displaystyle{ \sqrt{\sigma(dBZ^2)} }$. The texture is calculated using all points within the user-defined texture radius of the central point (conv_strat_texture_radius_km; line 2519) and is only valid if a sufficient fraction of the grid points within the texture radius have good data (conv_strat_min_valid_fraction_for_texture; line 2533). All locations where the texture exceeds a user-defined threshold are also defined as convection (conv_strat_min_texture_for_convection; line 2547). Similar to the first step, all points within the radius of convective influence are also flagged as convection.
Although Steiner et al. (1995) performs the aforementioned analysis at a single horizontal level, Radx2Grid uses a vertical layer of a user-defined depth to determine convective and stratiform precipitation (conv_strat_min_valid_height and con_strat_max_valid_height; lines 2452, 2464).
Note: the algorithm is currently undergoing significant upgrades that will be included in a future version of LROSE, possibly in a standalone application instead of within Radx2Grid.
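A schematic of the two flagging steps in NumPy (this is not the LROSE implementation; function and parameter names are illustrative, edge handling uses wrap-around for brevity, and texture is taken as the local standard deviation of dBZ):

```python
import numpy as np

def convective_mask(dbz, dbz_thresh=45.0, radius_px=2,
                    texture_radius_px=3, texture_thresh=15.0,
                    min_valid_fraction=0.5):
    """Flag convective grid points: intensity test, then texture test,
    each followed by dilation over a radius of convective influence."""
    conv = dbz >= dbz_thresh                    # step 1: definite convection

    ny, nx = dbz.shape
    yy, xx = np.ogrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    disk = (yy**2 + xx**2) <= radius_px**2      # radius of convective influence

    def dilate(mask, struct):
        # Simple morphological dilation (wraps at edges, fine for a sketch).
        out = np.zeros_like(mask)
        r = struct.shape[0] // 2
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                if struct[dy + r, dx + r]:
                    out |= np.roll(np.roll(mask, dy, 0), dx, 1)
        return out

    conv = dilate(conv, disk)

    # step 2: texture = std dev of dBZ in a local neighbourhood,
    # valid only where enough neighbours have good data
    tex = np.zeros_like(dbz)
    r = texture_radius_px
    for j in range(ny):
        for i in range(nx):
            win = dbz[max(0, j - r):j + r + 1, max(0, i - r):i + r + 1]
            valid = np.isfinite(win)
            if valid.mean() >= min_valid_fraction:
                tex[j, i] = np.std(win[valid])
    conv |= dilate(tex >= texture_thresh, disk)
    return conv
```

The two-step shape mirrors the description above: threshold, dilate by the convective radius, then repeat with the texture field.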
### Example
Example output from Hurricane Harvey (2017) from the Houston radar (KHGX) is shown below. In this example, the definite convective threshold was 45 dBZ, the convective radius was 5 km, the minimum texture was 15 dBZ, and the texture radius was 7 km. Red regions indicate convective precipitation.
### References
Steiner, M., Houze, R. A., Jr., & Yuter, S. E. (1995). Climatological Characterization of Three-Dimensional Storm Structure from Operational Radar and Rain Gauge Data, Journal of Applied Meteorology and Climatology, 34(9), 1978-2007. Link
|
{}
|
rate2by2.test {epitools} R Documentation
## Comparative tests of independence in rx2 rate tables
### Description
Tests for independence where each row of the rx2 table is compared to the exposure reference level, and two-sided p values for the test of independence are calculated using the mid-p exact method and normal approximation.
### Usage
rate2by2.test(x, y = NULL, rr = 1,
rev = c("neither", "rows", "columns", "both"))
### Arguments
x : input data; can be one of the following: an r x 2 table where the first column contains disease counts and the second column contains person-time at risk; or a single numeric vector of counts followed by person-time at risk

y : vector of person-time at risk; if provided, x must be a vector of disease counts

rr : rate ratio reference value (default is no association)

rev : reverse order of "rows", "columns", "both", or "neither" (default)
### Details
Tests for independence where each row of the rx2 table is compared to the exposure reference level, and two-sided p values for the test of independence are calculated using the mid-p exact method and normal approximation.
This function expects the following table struture:
counts person-time
exposed=0 (ref) n00 t01
exposed=1 n10 t11
exposed=2 n20 t21
exposed=3 n30 t31
The reason for this is because each level of exposure is compared to the reference level.
If the table you want to provide to this function is not in the preferred form, just use the rev option to "reverse" the rows, columns, or both. If you are providing categorical variables (factors or character vectors), the first level of the "exposure" variable is treated as the reference. However, you can set the reference of a factor using the relevel function.
Likewise, each row of the rx2 table is compared to the exposure reference level and test of independence two-sided p values are calculated using mid-p exact method and normal approximation.
This function can be used to construct a p value function by testing the MUE to the null hypothesis (rr=1) and alternative hypotheses (rr not equal to 1) to calculate two-side mid-p exact p values. For more detail, see Rothman.
### Value
x : table that was used in analysis

p.value : p value for test of independence
### Author(s)
Tomas Aragon, aragon@berkeley.edu, http://www.phdata.science
### References
Kenneth J. Rothman and Sander Greenland (2008), Modern Epidemiology, Lippincott Williams and Wilkins Publishers
Kenneth J. Rothman (2002), Epidemiology: An Introduction, Oxford University Press
### See Also

rateratio
### Examples
##Examples from Rothman 1998, p. 238
bc <- c(Unexposed = 15, Exposed = 41)
pyears <- c(Unexposed = 19017, Exposed = 28010)
dd <- matrix(c(41,15,28010,19017),2,2)
dimnames(dd) <- list(Exposure=c("Yes","No"), Outcome=c("BC","PYears"))
##midp
rate2by2.test(bc,pyears)
rate2by2.test(dd, rev = "r")
rate2by2.test(matrix(c(15, 41, 19017, 28010),2,2))
rate2by2.test(c(15, 41, 19017, 28010))
[Package epitools version 0.5-10.1 Index]
|
{}
|
[texhax] Adding an object into a MikTeX file
Edsko de Vries edsko at edsko.net
Sun Nov 11 16:45:07 CET 2007
On Fri, Nov 09, 2007 at 05:08:42PM +0000, G. Vlasakakis wrote:
>
> Hi,
>
> Could you pls provide me with some assistance about how i can insert a
> .jbeg, .jpg or .bmp into a MikTeX file? I need to insert some pictures and
> images found online to my report so i was thinking of firstly saving them
> as image files.
>
> thanks a lot in advance,
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\includegraphics[width=0.5\textwidth]{picture.jpg}
\end{document}
then use pdflatex to compile.
Edsko
|
{}
|
# Terminology about Abelian varieties over finite fields
Is there a standard meaning for ordinary and supersingular Abelian varieties over finite fields? If so, where can I find it (together with basic properties about them)?
-
## 1 Answer
See http://en.wikipedia.org/wiki/Hasse-Witt_matrix#Abelian_varieties_and_their_p-rank . "Ordinary" is always defined by p-rank equal to the dimension (the maximum possible). The article gives one definition of "supersingular", but that usage may not be universal.
-
The article you mentioned has a definition of supersingular but not of ordinary, right? – expmat May 11 '12 at 18:31
My answer has a definition of ordinary but not of supersingular, on the other hand. – Charles Matthews May 11 '12 at 19:07
|
{}
|
# t.test returns an error “data are essentially constant”
R version 3.1.1 (2014-07-10) -- "Sock it to Me"
> bl <- c(140, 138, 150, 148, 135)
> fu <- c(138, 136, 148, 146, 133)
> t.test(fu, bl, alternative = "two.sided", paired = TRUE)
Error in t.test.default(fu, bl, alternative = "two.sided", paired = TRUE) :
data are essentially constant
Then I change just a single character in my fu dataset:
> fu <- c(138, 136, 148, 146, 132)
and it runs...
> t.test(fu, bl, alternative = "two.sided", paired = TRUE)
Paired t-test
What am I missing here?
-
Type bl-fu. Now sd(bl-fu). If it's not obvious, yet, do these: dif=bl-fu then n=length(dif) then mean(dif)/(sd(dif)/sqrt(n))... do you see now? – Glen_b Aug 23 at 5:35
whoops, thanks :) agree with me that the error message could have been more newbie-friendly. So this means that as far as statistics go, there's no need for fancy t.test and it's a certainty that for each subject there would be a -2 reduction in the fu compared to the bl? – ihadanny Aug 23 at 5:46
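The same diagnosis, for illustration in Python rather than R (the arithmetic is identical): the paired t statistic divides by the standard deviation of the differences, which is exactly zero here.

```python
import statistics

bl = [140, 138, 150, 148, 135]
fu = [138, 136, 148, 146, 133]

d = [f - b for f, b in zip(fu, bl)]
print(d)                     # every difference is -2
print(statistics.stdev(d))   # 0.0 -> t = mean(d)/(sd(d)/sqrt(n)) is undefined
```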
As covered in comments, the issue was that the differences were all 2 (or -2, depending on which way around you write the pairs).
Responding to the question in comments:
So this means that as far as statistics go, there's no need for fancy t.test and it's a certainty that for each subject there would be a -2 reduction in the fu compared to the bl?
Well, that depends.
If the distribution of differences really was normal, that would be the conclusion, but it might be that the normality assumption is wrong and the distribution of differences in measurements is actually discrete (maybe in the population you wish to make inference about it's usually -2 but occasionally different from -2).
In fact, seeing that all the numbers are integers, it seems like discreteness is probably the case.
... in which case there's no such certainty that all differences will be -2 in the population; it's more that the sample shows no evidence that the population mean difference is anything other than -2.
(For example, if 87% of the population differences were -2, there's only a 50-50 chance that any of the 5 sample differences would be anything other than -2. So the sample is quite consistent with there being variation from -2 in the population)
But you would also be led to question the suitability of the assumptions for the t-test -- especially in such a small sample.
-
they are blood pressures in mmHg in a baseline and followup checks, so I'm pretty relaxed about assuming normality and of course non-discreteness. It was just an exercise that showed me how much more powerful is paired-t-test (when available) over non-paired. – ihadanny Aug 23 at 6:29
|
{}
|
# autolens.Grid2DIterate
class autolens.Grid2DIterate(shape, dtype=float, buffer=None, offset=0, strides=None, order=None)
__init__(*args, **kwargs)
Initialize self. See help(type(self)) for accurate signature.
Methods
- all([axis, out, keepdims]) – Returns True if all elements evaluate to True.
- any([axis, out, keepdims]) – Returns True if any of the elements of a evaluate to True.
- argmax([axis, out]) – Return indices of the maximum values along the given axis.
- argmin([axis, out]) – Return indices of the minimum values along the given axis of a.
- argpartition(kth[, axis, kind, order]) – Returns the indices that would partition this array.
- argsort([axis, kind, order]) – Returns the indices that would sort this array.
- array_at_sub_size_from(func, cls, mask, sub_size)
- astype(dtype[, order, casting, subok, copy]) – Copy of the array, cast to a specified type.
- blurring_grid_from(mask, …) – Setup a blurring-grid from a mask, where a blurring grid consists of all pixels that are masked (and therefore have their values set to (0.0, 0.0)), but are close enough to the unmasked pixels that their values will be convolved into those unmasked pixels.
- blurring_grid_via_kernel_shape_from(…) – Returns the blurring grid from a grid, creating it as a Grid2DIterate, via an input 2D kernel shape.
- byteswap([inplace]) – Swap the bytes of the array elements.
- choose(choices[, out, mode]) – Use an index array to construct a new array from a set of choices.
- clip([min, max, out]) – Return an array whose values are limited to [min, max].
- compress(condition[, axis, out]) – Return selected slices of this array along given axis.
- conj() – Complex-conjugate all elements.
- conjugate() – Return the complex conjugate, element-wise.
- copy([order]) – Return a copy of the array.
- cumprod([axis, dtype, out]) – Return the cumulative product of the elements along the given axis.
- cumsum([axis, dtype, out]) – Return the cumulative sum of the elements along the given axis.
- diagonal([offset, axis1, axis2]) – Return specified diagonals.
- distances_to_coordinate(coordinate, …) – Returns the distance of every coordinate on the grid from an input (y,x) coordinate.
- dot(b[, out]) – Dot product of two arrays.
- dump(file) – Dump a pickle of the array to the specified file.
- dumps() – Returns the pickle of the array as a string.
- extent_with_buffer(buffer) – The extent of the grid in scaled units returned as a list [x_min, x_max, y_min, y_max], where all values are buffed such that their extent is further than the grid's extent.
- fill(value) – Fill the array with a scalar value.
- flatten([order]) – Return a copy of the array collapsed into one dimension.
- fractional_mask_via_arrays_from(…) – Returns a fractional mask from a result array, where the fractional mask describes whether the evaluated value in the result array is within the Grid2DIterate's specified fractional accuracy.
- fractional_mask_via_grids_from(…) – Returns a fractional mask from a result array, where the fractional mask describes whether the evaluated value in the result array is within the Grid2DIterate's specified fractional accuracy.
- from_mask(mask, fractional_accuracy, sub_steps) – Create a Grid2DIterate (see Grid2DIterate.__new__) from a mask, where only unmasked pixels are included in the grid (if the grid is represented in 2D, masked values are (0.0, 0.0)).
- getfield(dtype[, offset]) – Returns a field of the given array as a certain type.
- grid_2d_radial_projected_from(centre, …) – Determine a projected radial grid of points from a 2D region of coordinates defined by an extent [xmin, xmax, ymin, ymax] and with a (y,x) centre.
- grid_at_sub_size_from(func, cls, mask, sub_size)
- grid_via_deflection_grid_from(deflection_grid) – Returns a new Grid2DIterate from this grid, where the (y,x) coordinates of this grid have a grid of (y,x) values, termed the deflection grid, subtracted from them to determine the new grid of (y,x) values.
- item(*args) – Copy an element of an array to a standard Python scalar and return it.
- itemset(*args) – Insert scalar into an array (scalar is cast to array's dtype, if possible).
- iterated_array_from(func, cls, …) – Iterate over a function that returns an array of values until it meets a specified fractional accuracy.
- iterated_grid_from(func, cls, grid_lower_sub_2d) – Iterate over a function that returns a grid of values until it meets a specified fractional accuracy.
- iterated_result_from(func, cls) – Iterate over a function that returns an array or grid of values until it meets a specified fractional accuracy.
- load(file_path, filename)
- manual_slim(grid, shape_native, …) – Create a Grid2DIterate (see Grid2DIterate.__new__) by inputting the grid coordinates in 1D.
- max([axis, out, keepdims, initial, where]) – Return the maximum along a given axis.
- mean([axis, dtype, out, keepdims]) – Returns the average of the array elements along given axis.
- min([axis, out, keepdims, initial, where]) – Return the minimum along a given axis.
- newbyteorder([new_order]) – Return the array with the same data viewed with a different byte order.
- nonzero() – Return the indices of the elements that are non-zero.
- output_to_fits(file_path, overwrite) – Output the grid to a .fits file.
- padded_before_convolution_from(kernel_shape)
- padded_grid_from(kernel_shape_native) – When the edge pixels of a mask are unmasked and a convolution is to occur, the signal of edge pixels will be 'missing' if the grid is used to evaluate the signal via an analytic function.
- partition(kth[, axis, kind, order]) – Rearranges the elements in the array in such a way that the value of the element in kth position is in the position it would be in a sorted array.
- prod([axis, dtype, out, keepdims, initial, …]) – Return the product of the array elements over the given axis.
- ptp([axis, out, keepdims]) – Peak to peak (maximum - minimum) value along a given axis.
- put(indices, values[, mode]) – Set a.flat[n] = values[n] for all n in indices.
- ravel([order]) – Return a flattened array.
- relocated_grid_from(grid) – Relocate the coordinates of a grid to the border of this grid if they are outside the border, where the border is defined as all pixels at the edge of the grid's mask (see mask._border_1d_indexes).
- relocated_pixelization_grid_from(…) – Relocate the coordinates of a pixelization grid to the border of this grid; see the method relocated_grid_from for a full description of grid relocation.
- repeat(repeats[, axis]) – Repeat elements of an array.
- reshape(shape[, order]) – Returns an array containing the same data with a new shape.
- resize(new_shape[, refcheck]) – Change shape and size of array in-place.
- resized_from(new_shape)
- return_iterated_array_result(iterated_array) – Returns the resulting iterated array, by mapping it to 1D and then passing it back as an Array2D structure.
- round([decimals, out]) – Return a with each element rounded to the given number of decimals.
- save(file_path, filename) – Save the tracer by serializing it with pickle.
- searchsorted(v[, side, sorter]) – Find indices where elements of v should be inserted in a to maintain order.
- setfield(val, dtype[, offset]) – Put a value into a specified place in a field defined by a data-type.
- setflags([write, align, uic]) – Set array flags WRITEABLE, ALIGNED, (WRITEBACKIFCOPY and UPDATEIFCOPY), respectively.
- sort([axis, kind, order]) – Sort an array in-place.
- squared_distances_to_coordinate(coordinate, …) – Returns the squared distance of every coordinate on the grid from an input coordinate.
- squeeze([axis]) – Remove single-dimensional entries from the shape of a.
- std([axis, dtype, out, ddof, keepdims]) – Returns the standard deviation of the array elements along given axis.
- structure_2d_from(result)
- structure_2d_list_from(result_list)
- sum([axis, dtype, out, keepdims, initial, where]) – Return the sum of the array elements over the given axis.
- swapaxes(axis1, axis2) – Return a view of the array with axis1 and axis2 interchanged.
- take(indices[, axis, out, mode]) – Return an array formed from the elements of a at the given indices.
- tobytes([order]) – Construct Python bytes containing the raw data bytes in the array.
- tofile(fid[, sep, format]) – Write array to a file as text or binary (default).
- tolist() – Return the array as an a.ndim-levels deep nested list of Python scalars.
- tostring([order]) – A compatibility alias for tobytes, with exactly the same behavior.
- trace([offset, axis1, axis2, dtype, out]) – Return the sum along diagonals of the array.
- transpose(*axes) – Returns a view of the array with axes transposed.
- trimmed_after_convolution_from(kernel_shape)
- uniform(shape_native, pixel_scales, …) – Create a Grid2DIterate (see Grid2DIterate.__new__) as a uniform grid of (y,x) values given an input shape_native and pixel scale of the grid.
- values_from(array_slim) – Create a ValuesIrregular object from a 1D NumPy array of values of shape [total_coordinates].
- var([axis, dtype, out, ddof, keepdims]) – Returns the variance of the array elements, along given axis.
- view([dtype][, type]) – New view of array with the same data.
Attributes
- T – The transposed array.
- base – Base object if memory is from some other object.
- binned – Convenience method to access the binned-up grid in its 1D representation, which is a Grid2D stored as an ndarray of shape [total_unmasked_pixels, 2].
- ctypes – An object to simplify the interaction of the array with the ctypes module.
- data – Python buffer object pointing to the start of the array's data.
- dtype – Data-type of the array's elements.
- extent – The extent of the grid in scaled units returned as an ndarray of the form [x_min, x_max, y_min, y_max].
- flags – Information about the memory layout of the array.
- flat – A 1-D iterator over the array.
- flipped – Return the grid as an ndarray of shape [total_unmasked_pixels, 2] with flipped values such that coordinates are given as (x,y) values.
- fractional_mask_via_arrays_jit_from – True entries signify the function has been evaluated in that pixel to the desired fractional accuracy.
- fractional_mask_via_grids_jit_from – True entries signify the function has been evaluated in that pixel to the desired fractional accuracy.
- imag – The imaginary part of the array.
- in_radians – Return the grid as an ndarray where all (y,x) values are converted to radians.
- itemsize – Length of one array element in bytes.
- iterated_array_jit_from – Create the iterated array from a result array that is computed at a higher sub size level than the previous grid.
- iterated_grid_jit_from – Create the iterated grid from a result grid that is computed at a higher sub size level than the previous grid.
- native – Return a Grid2D where the data is stored in its native representation, which is an ndarray of shape [sub_size*total_y_pixels, sub_size*total_x_pixels, 2].
- nbytes – Total bytes consumed by the elements of the array.
- ndim – Number of array dimensions.
- origin
- pixel_scale
- pixel_scales
- real – The real part of the array.
- scaled_maxima – The maximum values of the grid in scaled coordinates returned as a tuple (y_max, x_max).
- scaled_minima – The minimum values of the grid in scaled coordinates returned as a tuple (y_min, x_min).
- shape – Tuple of array dimensions.
- shape_native
- shape_native_scaled – The two dimensional shape of the grid in scaled units, computed by taking the minimum and maximum values of the grid.
- shape_slim
- size – Number of elements in the array.
- slim – Return a Grid2D where the data is stored in its slim representation, which is an ndarray of shape [total_unmasked_pixels * sub_size**2, 2].
- strides – Tuple of bytes to step in each dimension when traversing an array.
- sub_border_grid – A property that is only computed once per instance and then replaces itself with an ordinary attribute.
- sub_shape_native
- sub_shape_slim
- sub_size
- total_pixels
- unmasked_grid
classmethod manual_slim(grid: Union[numpy.ndarray, List[T]], shape_native: Tuple[int, int], pixel_scales: Union[Tuple[float, float], float], origin: Tuple[float, float] = (0.0, 0.0), fractional_accuracy: float = 0.9999, sub_steps: Optional[List[int]] = None) → autoarray.structures.grids.two_d.grid_2d_iterate.Grid2DIterate
Create a Grid2DIterate (see Grid2DIterate.__new__) by inputting the grid coordinates in 1D, for example:
grid=np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]])
grid=[[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]]
From 1D input the method cannot determine the 2D shape of the grid and its mask, thus the shape_native must be input into this method. The mask is set up as an unmasked Mask2D of shape_native.
Parameters:
- grid (ndarray or list) – The (y,x) coordinates of the grid, input as an ndarray of shape [total_unmasked_pixels*(sub_size**2), 2] or a list of lists.
- shape_native – The 2D shape of the mask the grid is paired with.
- pixel_scales – The (y,x) scaled units to pixel units conversion factors of every pixel. If this is input as a float, it is converted to a (float, float) structure.
- fractional_accuracy – The fractional accuracy the evaluated function must meet to be accepted, where this accuracy is the ratio of the value at a higher sub_size to the value computed using the previous sub_size.
- sub_steps ([int] or None) – The sub-size values used to iteratively evaluate the function at high levels of sub-gridding. If None, they are set up as the default values [2, 4, 8, 16].
- origin – The origin of the grid's mask.
classmethod uniform(shape_native: Tuple[int, int], pixel_scales: Union[Tuple[float, float], float], origin: Tuple[float, float] = (0.0, 0.0), fractional_accuracy: float = 0.9999, sub_steps: Optional[List[int]] = None) → autoarray.structures.grids.two_d.grid_2d_iterate.Grid2DIterate
Create a Grid2DIterate (see Grid2DIterate.__new__) as a uniform grid of (y,x) values given an input shape_native and pixel scale of the grid:
Parameters:
- shape_native – The 2D shape of the uniform grid and the mask that it is paired with.
- pixel_scales – The (y,x) scaled units to pixel units conversion factors of every pixel. If this is input as a float, it is converted to a (float, float) structure.
- fractional_accuracy – The fractional accuracy the evaluated function must meet to be accepted, where this accuracy is the ratio of the value at a higher sub_size to the value computed using the previous sub_size.
- sub_steps ([int] or None) – The sub-size values used to iteratively evaluate the function at high levels of sub-gridding. If None, they are set up as the default values [2, 4, 8, 16].
- origin – The origin of the grid's mask.
classmethod from_mask(mask: autoarray.mask.mask_2d.Mask2D, fractional_accuracy: float = 0.9999, sub_steps: Optional[List[int]] = None) → autoarray.structures.grids.two_d.grid_2d_iterate.Grid2DIterate
Create a Grid2DIterate (see Grid2DIterate.__new__) from a mask, where only unmasked pixels are included in the grid (if the grid is represented in 2D masked values are (0.0, 0.0)).
The mask’s pixel_scales and origin properties are used to compute the grid (y,x) coordinates.
Parameters:
- mask (Mask2D) – The mask whose masked pixels are used to setup the sub-pixel grid.
- fractional_accuracy – The fractional accuracy the evaluated function must meet to be accepted, where this accuracy is the ratio of the value at a higher sub_size to the value computed using the previous sub_size.
- sub_steps ([int] or None) – The sub-size values used to iteratively evaluate the function at high levels of sub-gridding. If None, they are set up as the default values [2, 4, 8, 16].
classmethod blurring_grid_from(mask: autoarray.mask.mask_2d.Mask2D, kernel_shape_native: Tuple[int, int], fractional_accuracy: float = 0.9999, sub_steps: Optional[List[int]] = None) → autoarray.structures.grids.two_d.grid_2d_iterate.Grid2DIterate
Setup a blurring-grid from a mask, where a blurring grid consists of all pixels that are masked (and therefore have their values set to (0.0, 0.0)), but are close enough to the unmasked pixels that their values will be convolved into those unmasked pixels. This is used when computing images from light profile objects.
See Grid2D.blurring_grid_from for a full description of a blurring grid. This method creates the blurring grid as a Grid2DIterate.
Parameters:
- mask (Mask2D) – The mask whose masked pixels are used to setup the blurring grid.
- kernel_shape_native – The 2D shape of the kernel which convolves signal from masked pixels to unmasked pixels.
- fractional_accuracy – The fractional accuracy the evaluated function must meet to be accepted, where this accuracy is the ratio of the value at a higher sub_size to the value computed using the previous sub_size.
- sub_steps ([int] or None) – The sub-size values used to iteratively evaluate the function at high levels of sub-gridding. If None, they are set up as the default values [2, 4, 8, 16].
grid_via_deflection_grid_from(deflection_grid: numpy.ndarray) → autoarray.structures.grids.two_d.grid_2d_iterate.Grid2DIterate
Returns a new Grid2DIterate from this grid, where the (y,x) coordinates of this grid have a grid of (y,x) values, termed the deflection grid, subtracted from them to determine the new grid of (y,x) values.
This is used by PyAutoLens to perform grid ray-tracing.
Parameters: deflection_grid – The grid of (y,x) coordinates which is subtracted from this grid.
blurring_grid_via_kernel_shape_from(kernel_shape_native: Tuple[int, int]) → autoarray.structures.grids.two_d.grid_2d_iterate.Grid2DIterate
Returns the blurring grid from a grid, creating it as a Grid2DIterate, via an input 2D kernel shape.
For a full description of blurring grids, checkout blurring_grid_from.
Parameters:
- kernel_shape_native – The 2D shape of the kernel which convolves signal from masked pixels to unmasked pixels.
padded_grid_from(kernel_shape_native: Tuple[int, int]) → autoarray.structures.grids.two_d.grid_2d_iterate.Grid2DIterate
When the edge pixels of a mask are unmasked and a convolution is to occur, the signal of edge pixels will be ‘missing’ if the grid is used to evaluate the signal via an analytic function.
To ensure this signal is included the padded grid is used, which is ‘buffed’ such that it includes all pixels whose signal will be convolved into the unmasked pixels given the 2D kernel shape.
Parameters: kernel_shape_native – The 2D shape of the kernel which convolves signal from masked pixels to unmasked pixels.
fractional_mask_via_arrays_from(array_lower_sub_2d: autoarray.structures.arrays.two_d.array_2d.Array2D, array_higher_sub_2d: autoarray.structures.arrays.two_d.array_2d.Array2D) → autoarray.mask.mask_2d.Mask2D
Returns a fractional mask from a result array, where the fractional mask describes whether the evaluated value in the result array is within the Grid2DIterate's specified fractional accuracy. The fractional mask thus determines whether a pixel on the grid needs to be re-evaluated at a higher level of sub-gridding to meet the specified fractional accuracy. If it must be re-evaluated, the fractional mask's entry is False.
The fractional mask is computed by comparing the results evaluated at one level of sub-gridding to another at a higher level of sub-gridding. Thus, the sub-grid size is chosen on a per-pixel basis until the function is evaluated at the specified fractional accuracy.
Parameters:
- array_lower_sub_2d (Array2D) – The results computed by a function using a lower sub-grid size.
- array_higher_sub_2d (Array2D) – The results computed by a function using a higher sub-grid size.
fractional_mask_via_arrays_jit_from
• True entries signify the function has been evaluated in that pixel to desired fractional accuracy and
therefore does not need to be iteratively computed at higher levels of sub-gridding.
• False entries signify the function has not been evaluated in that pixel to desired fractional accuracy and
therefore must be iteratively computed at higher levels of sub-gridding to meet this accuracy.
Type: Jitted function to determine the fractional mask, which is a mask where
iterated_array_from(func: Callable, cls: object, array_lower_sub_2d: autoarray.structures.arrays.two_d.array_2d.Array2D) → autoarray.structures.arrays.two_d.array_2d.Array2D
Iterate over a function that returns an array of values until it meets a specified fractional accuracy. The function returns a result on a pixel-grid, where evaluating it on more points of a higher resolution sub-grid followed by binning leads to a more precise evaluation of the function. The function is assumed to belong to a class, which is input into the method.
The function is first called for a sub-grid size of 1 and a higher resolution grid. The ratio of values gives the fractional accuracy of each function evaluation. Pixels which do not meet the fractional accuracy are iteratively re-evaluated on higher resolution sub-grids. This is repeated until all pixels meet the fractional accuracy or the highest sub-size specified in the sub_steps attribute is computed.
If the function returns all zeros, the iteration is terminated early, given that all levels of sub-gridding will return zeros. This occurs when a function is missing optional objects that contribute to the calculation.
An example use case is when an "image_2d_from" method in PyAutoGalaxy's LightProfile module is computed, where evaluating the function on a higher resolution sub-grid samples the analytic light profile at more points and thus more precisely.
Parameters:
- func (func) – The function which is iterated over to compute a more precise evaluation.
- cls (cls) – The class the function belongs to.
- array_lower_sub_2d (Array2D) – The results computed by the function using a lower sub-grid size.
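The per-pixel refinement scheme described above can be sketched in plain NumPy. This is an illustrative simplification, not autoarray's actual implementation: `func`, the sub-step values and the accuracy criterion are stand-ins.

```python
import numpy as np

def iterated_values(func, values_sub_1, sub_steps=(2, 4, 8, 16),
                    fractional_accuracy=0.9999):
    """Refine a per-pixel result until every pixel meets the fractional
    accuracy, in the spirit of the iteration described above."""
    result = np.array(values_sub_1, dtype=float)
    previous = result.copy()
    unresolved = np.ones(result.shape, dtype=bool)
    for sub_size in sub_steps:
        higher = func(sub_size)  # evaluate at the next sub-grid size
        result[unresolved] = higher[unresolved]
        # the ratio of successive evaluations measures per-pixel accuracy
        with np.errstate(divide="ignore", invalid="ignore"):
            fraction = np.where(higher != 0, previous / higher, 1.0)
        unresolved &= np.abs(fraction - 1.0) > (1.0 - fractional_accuracy)
        previous = higher
        if not unresolved.any():  # every pixel meets the accuracy
            break
    return result
```

Pixels whose successive evaluations agree to within the fractional accuracy are frozen, while the remainder are recomputed at the next sub-grid size, mirroring the per-pixel masking logic above.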
return_iterated_array_result(iterated_array: autoarray.structures.arrays.two_d.array_2d.Array2D) → autoarray.structures.arrays.two_d.array_2d.Array2D
Returns the resulting iterated array, by mapping it to 1D and then passing it back as an Array2D structure.
Parameters:
- iterated_array – The resulting array computed via iteration.
iterated_array_jit_from
Create the iterated array from a result array that is computed at a higher sub size level than the previous grid.
The iterated array is only updated for pixels where the fractional accuracy is met.
fractional_mask_via_grids_from(grid_lower_sub_2d: autoarray.structures.grids.two_d.grid_2d.Grid2D, grid_higher_sub_2d: autoarray.structures.grids.two_d.grid_2d.Grid2D) → autoarray.mask.mask_2d.Mask2D
Returns a fractional mask from a result array, where the fractional mask describes whether the evaluated value in the result array is within the Grid2DIterate's specified fractional accuracy. The fractional mask thus determines whether a pixel on the grid needs to be re-evaluated at a higher level of sub-gridding to meet the specified fractional accuracy. If it must be re-evaluated, the fractional mask's entry is False.
The fractional mask is computed by comparing the results evaluated at one level of sub-gridding to another at a higher level of sub-gridding. Thus, the sub-grid size is chosen on a per-pixel basis until the function is evaluated at the specified fractional accuracy.
Parameters:
- grid_lower_sub_2d (Grid2D) – The results computed by a function using a lower sub-grid size.
- grid_higher_sub_2d (Grid2D) – The results computed by a function using a higher sub-grid size.
fractional_mask_via_grids_jit_from
• True entries signify the function has been evaluated in that pixel to desired fractional accuracy and
therefore does not need to be iteratively computed at higher levels of sub-gridding.
• False entries signify the function has not been evaluated in that pixel to desired fractional accuracy and
therefore must be iteratively computed at higher levels of sub-gridding to meet this accuracy.
Type: Jitted function to determine the fractional mask, which is a mask where
iterated_grid_from(func: Callable, cls: object, grid_lower_sub_2d: autoarray.structures.grids.two_d.grid_2d.Grid2D) → autoarray.structures.grids.two_d.grid_2d.Grid2D
Iterate over a function that returns a grid of values until it meets a specified fractional accuracy. The function returns a result on a pixel-grid, where evaluating it on more points of a higher resolution sub-grid followed by binning leads to a more precise evaluation of the function. For the fractional accuracy of the grid to be met, both the y and x values must meet it.
The function is first called for a sub-grid size of 1 and a higher resolution grid. The ratio of values gives the fractional accuracy of each function evaluation. Pixels which do not meet the fractional accuracy are iteratively re-evaluated on higher resolution sub-grids. This is repeated until all pixels meet the fractional accuracy or the highest sub-size specified in the sub_steps attribute is computed.
If the function returns all zeros, the iteration is terminated early, given that all levels of sub-gridding will return zeros. This occurs when a function is missing optional objects that contribute to the calculation.
An example use case is when a "deflections_2d_from" method in PyAutoLens's MassProfile module is computed, where evaluating the function on a higher resolution sub-grid samples the analytic mass profile at more points and thus more precisely.
Parameters:
- func – The function which is iterated over to compute a more precise evaluation.
- cls – The class the function belongs to.
- grid_lower_sub_2d – The results computed by the function using a lower sub-grid size.
iterated_grid_jit_from
Create the iterated grid from a result grid that is computed at a higher sub size level than the previous grid.
The iterated grid is only updated for pixels where the fractional accuracy is met in both the (y,x) coordinates.
iterated_result_from(func: Callable, cls: object) → Union[autoarray.structures.arrays.two_d.array_2d.Array2D, autoarray.structures.grids.two_d.grid_2d.Grid2D]
Iterate over a function that returns an array or grid of values until it meets a specified fractional accuracy. The function returns a result on a pixel-grid, where evaluating it on more points of a higher resolution sub-grid followed by binning leads to a more precise evaluation of the function.
A full description of the iteration method can be found in the functions iterated_array_from and iterated_grid_from. This function computes the result on a grid with a sub-size of 1, and uses its shape to call the correct function.
Parameters:
- func (func) – The function which is iterated over to compute a more precise evaluation.
- cls (object) – The class the function belongs to.
# Expectation Maximization for latent variable models
In all the notebooks we’ve seen so far, we have made the assumption that the observations correspond directly to realizations of a random variable. Take the case of linear regression: we are given observations of the random variable $t$ (plus some noise), which is the target value for a given value of the input $\mathbf{x}$. Under some criterion, we find the best parameters $\boldsymbol{\theta}$ of a model $y(\mathbf{x}, \boldsymbol{\theta})$ that is able to explain the observations and yield predictions for new inputs.
In the more general case, we have a dataset of observations $\mathcal{D} = \lbrace \mathbf{x_1}, …, \mathbf{x_N}\rbrace$. We hypothesize that each observation is drawn from a probability distribution $p(\mathbf{x_i}\vert\boldsymbol{\theta})$ with parameters $\boldsymbol{\theta}$. It is sometimes useful to think of this as a probabilistic graphical model, where nodes represent random variables and edges encode dependency relationships between them. In this case, the graph looks as follows:
In this graph we show that we have $N$ observations by enclosing the random variables within a plate. This also represents the fact that we assume the observations to be independent.
For many situations this model works, that is, the model is able to explain the data and can be used to make predictions for new observations. In other cases, however, this model is not expressive enough.
Imagine that we have a single dimensional variable $x$ of which we observe some samples. Our first hypothesis is that $p(x\vert\boldsymbol{\theta})$ is a normal distribution, so we proceed to find the mean and variance of this distribution using maximum likelihood estimation (MLE):
%matplotlib inline
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt
from data.synthetic import gaussian_mixture
X = gaussian_mixture(200)
# The MLE estimates are the sample mean and standard deviation
mean = np.mean(X)
std = np.std(X)
# Plot fit on top of histogram
fig, ax1 = plt.subplots()
ax1.hist(X, alpha=0.4)
ax1.set_ylabel('Counts')
ax1.set_xlabel('x')
ax2 = ax1.twinx()
x = np.linspace(-4, 4, 100)
ax2.plot(x, norm.pdf(x))
ax2.set_ylim([0, 0.5])
ax2.set_ylabel('Probability density');
Clearly, once we have actually examined the data, we realize that a single normal distribution is not a good model. The data seems to come from a multimodal distribution with two components, which a single Gaussian is not able to capture. In this case we are better off changing our model to a mixture model, a model that mixes two or more distributions. For the example above, it would seem that a mixture of two components, centered at -2 and 2, would be a better fit.
Under this idea, our hypothesis is the next: there are $K$ components in the mixture model. We start by selecting a component $k$ with some probability $\pi_k$ and, given the selected component, we then draw a sample $x$ from a normal distribution $\mathcal{N}(x\vert\mu_k, \sigma_k^2)$.
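This two-stage generative story can be written down directly. A minimal sketch of ancestral sampling, with component parameters chosen to mirror the example above:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(n, pis, mus, sigmas):
    """Ancestral sampling: draw z_n ~ Categorical(pi), then
    x_n ~ N(mu_{z_n}, sigma_{z_n}^2)."""
    z = rng.choice(len(pis), size=n, p=pis)
    x = rng.normal(loc=np.take(mus, z), scale=np.take(sigmas, z))
    return x, z

x, z = sample_mixture(200, pis=[0.5, 0.5], mus=[-2.0, 2.0], sigmas=[1.0, 1.0])
```

A histogram of `x` looks much like the bimodal data above; the array `z` is exactly the latent variable we do not get to observe in practice.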
We can think of the component as a discrete random variable $\mathbf{z}$ that can take values from 1 up to $K$. Therefore, to each sample $x$ there is an associated value of $\mathbf{z}$. Since we do not observe $\mathbf{z}$, we call it a latent variable. If we collapse the parameters $\pi_k$, $\mu_k$ and $\sigma_k$ into a single parameter $\boldsymbol{\theta}$, the graphical model for the general case is now the following:
Note how the model emphasizes the fact that each observation has an associated value of the latent variable. Also note that since $z$ is not observed, its node is not shaded.
We can now proceed to find the parameters of our new model, using MLE (or maximum a posteriori, if we have priors on the parameters). However, as is usually the case, we have traded tractability for expressiveness by introducing latent variables. If we attempt to maximize the log-likelihood, we find that we need to maximize
\begin{align} \sum_{n=1}^N\log \sum_{z}p(x_n\vert z_n, \boldsymbol{\theta})p(z_n\vert\boldsymbol{\theta}) \end{align}
which due to the summation inside the logarithm, does not result in a closed form solution. An alternative is to use the Expectation Maximization algorithm, an iterative procedure that can be used to find maximum likelihood estimates of the parameters, which we motivate next.
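To make the obstruction concrete: the objective is still easy to evaluate numerically, even though it has no closed-form maximizer. A sketch for the one-dimensional Gaussian mixture (the log-sum-exp is just a numerically stable way of computing the sum that sits inside the logarithm):

```python
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

def gmm_log_likelihood(x, pis, mus, sigmas):
    # log p(x_n) = log sum_k pi_k N(x_n | mu_k, sigma_k^2):
    # the sum over k sits inside the log, so the terms do not decouple.
    log_terms = np.log(pis) + norm.logpdf(x[:, None], loc=mus, scale=sigmas)
    return logsumexp(log_terms, axis=1).sum()
```

One could hand this objective to a generic optimizer, but EM exploits the structure of the problem instead.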
Let $\mathbf{X}$ and $\mathbf{Z}$ denote the set of observed and latent variables, respectively, for which we have defined a joint parametric probability distribution
$p(\mathbf{X}, \mathbf{Z}\vert\boldsymbol{\theta}) = p(\mathbf{X}\vert\mathbf{Z},\boldsymbol{\theta})p(\mathbf{Z}\vert\boldsymbol{\theta})$
It can be shown [1, 2] that for any distribution $q(\mathbf{Z})$ we can decompose the log-likelihood as
$\log p(\mathbf{X}\vert\boldsymbol{\theta}) = \mathcal{L}(q, \boldsymbol{\theta}) + \text{KL}(q(\mathbf{Z})\Vert p(\mathbf{Z}\vert\mathbf{X}, \boldsymbol{\theta}))\tag{1}$
where $\text{KL}$ is the Kullback-Leibler divergence, and $\mathcal{L}(q,\boldsymbol{\theta})$ is known as the Evidence Lower Bound (ELBO): since the KL divergence is always non-negative, it is a lower bound for $\log p(\mathbf{X}\vert\boldsymbol{\theta})$. The ELBO is defined as
$\mathcal{L}(q, \boldsymbol{\theta}) = \mathbb{E}_q[\log p(\mathbf{X},\mathbf{Z}\vert\boldsymbol{\theta})] - \mathbb{E}_q[\log q(\mathbf{Z})]\tag{2}$
Note that these expectations are taken with respect to the distribution $q(\mathbf{Z})$.
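To see where the decomposition comes from, substitute $p(\mathbf{X},\mathbf{Z}\vert\boldsymbol{\theta}) = p(\mathbf{Z}\vert\mathbf{X},\boldsymbol{\theta})p(\mathbf{X}\vert\boldsymbol{\theta})$ into equation 2:

$\mathcal{L}(q, \boldsymbol{\theta}) = \mathbb{E}_q\left[\log\frac{p(\mathbf{Z}\vert\mathbf{X},\boldsymbol{\theta})}{q(\mathbf{Z})}\right] + \log p(\mathbf{X}\vert\boldsymbol{\theta}) = -\text{KL}(q(\mathbf{Z})\Vert p(\mathbf{Z}\vert\mathbf{X},\boldsymbol{\theta})) + \log p(\mathbf{X}\vert\boldsymbol{\theta})$

where $\log p(\mathbf{X}\vert\boldsymbol{\theta})$ comes out of the expectation because it does not depend on $\mathbf{Z}$; rearranging gives equation 1.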
We can now use this decomposition in the EM algorithm to define two steps:
• E step: Initialize the parameters with some value $\boldsymbol{\theta}^\prime$. In equation 1, close the gap between the lower bound and the likelihood by making the KL divergence equal to zero. We achieve this by setting $q(\mathbf{Z})$ equal to the posterior $p(\mathbf{Z}\vert\mathbf{X},\boldsymbol{\theta}^\prime)$, which usually involves using Bayes’ theorem to calculate
$p(\mathbf{Z}\vert\mathbf{X},\boldsymbol{\theta}^\prime) = \frac{p(\mathbf{X}\vert\mathbf{Z},\boldsymbol{\theta}^\prime)p(\mathbf{Z}\vert\boldsymbol{\theta}^\prime)}{p(\mathbf{X}\vert\boldsymbol{\theta}^\prime)}$
• M step: now that the likelihood is equal to the lower bound, maximize the lower bound in equation 2 with respect to the parameters. We find
$\boldsymbol{\theta}^{\text{new}} = \arg\max_{\boldsymbol{\theta}} \mathbb{E}_q[\log p(\mathbf{X},\mathbf{Z}\vert\boldsymbol{\theta})]$
where we have dropped the second term in equation 2 as it does not depend on the parameters, and the expectation is calculated with respect to $q(\mathbf{Z}) = p(\mathbf{Z}\vert\mathbf{X},\boldsymbol{\theta}^\prime)$. In this step we calculate the derivatives with respect to the parameters and set them to zero to find the maximizing values.
This process is repeated until convergence.
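For the mixture of Gaussians above, both steps have closed forms. Here is a didactic sketch of the full loop (a fixed number of iterations and arbitrary initial values, rather than a proper convergence check):

```python
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, n_iter=50):
    """EM for a 1D mixture of two Gaussians (didactic sketch)."""
    # arbitrary initialization of theta' = (pi_k, mu_k, sigma_k)
    pis = np.array([0.5, 0.5])
    mus = np.array([-1.0, 1.0])
    sigmas = np.array([1.0, 1.0])
    for _ in range(n_iter):
        # E step: responsibilities r_nk = p(z_n = k | x_n, theta')
        r = pis * norm.pdf(x[:, None], loc=mus, scale=sigmas)
        r /= r.sum(axis=1, keepdims=True)
        # M step: closed-form maximizers of E_q[log p(X, Z | theta)]
        nk = r.sum(axis=0)
        pis = nk / len(x)
        mus = (r * x[:, None]).sum(axis=0) / nk
        sigmas = np.sqrt((r * (x[:, None] - mus) ** 2).sum(axis=0) / nk)
    return pis, mus, sigmas
```

On data like the bimodal sample above, the means converge towards the two modes, with the responsibilities softly assigning each point to a component.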
## A practical example
We will use this idea to fit a mixture model from a subset of the famous MNIST dataset. In this dataset each image is of size 28 by 28, containing a handwritten number between 0 and 9, although for simplicity we will take only the digits 4, 5, and 2.
We will process the images so that they are binary, so that a pixel can be either 1 or 0. The images are flattened, so that a digit is represented by a vector of 28 $\times$ 28 = 784 values.
from sklearn.datasets import fetch_openml
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
# Fetch data from OpenML.org
# as_frame=False returns NumPy arrays rather than a DataFrame
X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)
# Take three digits only
sample_indices = []
for i, label in enumerate(y):
if label in ['4', '5', '2']:
sample_indices.append(i)
X = X[sample_indices]
y = y[sample_indices]
# Binarize
X_bin = np.zeros(X.shape)
X_bin[X > np.max(X)/2] = 1
# Visualize some digits
plt.figure(figsize=(14, 1))
for i in range(10):
plt.subplot(1, 10, i+1)
plt.imshow(X_bin[i].reshape((28, 28)), cmap='bone')
plt.axis('off')
Our hypothesis is that each pixel $x_i$ in the image $\mathbf{x}$ is a Bernoulli random variable with probability $\mu_i$ of being 1, and so we define the vector $\boldsymbol{\mu}$ as containing the probabilities for each pixel.
The probabilities can be different depending on whether the number is a 2, a 4 or a 5, so we will define 3 components for the mixture model. This means that there will be 3 mean vectors $\boldsymbol{\mu}_i$ and each component will have a probability $\pi_k$. These are the parameters that we will find using the EM algorithm, which I have implemented in the BernoulliMixture class.
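The BernoulliMixture class itself is not shown in this post. As a rough sketch of what its two steps compute, here is a minimal NumPy version of one EM iteration for a Bernoulli mixture (the function name em_step and its signature are made up for illustration; this is not the actual implementation):

```python
import numpy as np

def em_step(X, mu, pi, eps=1e-10):
    """One EM iteration for a Bernoulli mixture.

    X:  (N, D) binary data; mu: (K, D) per-pixel probabilities;
    pi: (K,) mixing coefficients.
    """
    # E step: responsibilities gamma[n, k] proportional to
    # pi_k * prod_d mu_kd^x_nd * (1 - mu_kd)^(1 - x_nd), computed in log space.
    log_p = X @ np.log(mu + eps).T + (1 - X) @ np.log(1 - mu + eps).T  # (N, K)
    log_p += np.log(pi + eps)
    log_p -= log_p.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    gamma = np.exp(log_p)
    gamma /= gamma.sum(axis=1, keepdims=True)

    # M step: maximize the expected complete-data log-likelihood.
    Nk = gamma.sum(axis=0)                      # effective number of points per component
    mu_new = (gamma.T @ X) / Nk[:, None]
    pi_new = Nk / X.shape[0]
    return mu_new, pi_new, gamma
```

Iterating em_step until the parameters (or the log-likelihood) stop changing is exactly the loop described in the previous section.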
from mixture_models.bernoulli_mixture import BernoulliMixture
model = BernoulliMixture(dimensions=784, n_components=3, verbose=True)
model.fit(X_bin)
Iteration 9/100, convergence: 0.000088
Terminating on convergence.
We can now observe the means and the mixing coefficients of the mixture model after convergence of the EM algorithm, which are stored in the model.mu and model.pi attributes:
plt.figure(figsize=(10, 4))
for i, component_mean in enumerate(model.mu):
plt.subplot(1, 3, i + 1)
plt.imshow(component_mean.reshape((28, 28)), cmap='Blues')
plt.title(r'$\pi_{:d}$ = {:.3f}'.format(i + 1, model.pi[i]))
plt.axis('off')
We can see that the means correspond to the three digits in the dataset. In this particular case, given a value of 1 for the latent variable $\mathbf{z}$ (corresponding to the digit 4), the observation (the image) is a sample from the distribution whose mean is the leftmost one in the plots above. The mixing coefficients give us an idea of the proportion of instances of each digit in the dataset used to train the model.
As we have seen, introducing latent variables has allowed us to specify complex probability distributions that are likely to provide more predictive power in some applications. Other examples include mixtures of Gaussians for clustering applications, where usually an algorithm like K-means fails at capturing different covariances for each cluster; and non-trivial probabilistic models of real life data, such as click models [2].
It is important to note that the EM algorithm is not the ultimate way to find parameters of models with latent variables. On one hand, the method is sensitive to the initialization of the parameters and might end up at a local maximum of the log-likelihood. On the other hand, we have assumed that we can calculate the posterior $p(\mathbf{Z}\vert\mathbf{X},\boldsymbol{\theta})$, which sometimes is not possible for more elaborate models. In such cases it is necessary to resort to approximate methods such as sampling and variational inference. This last area has led to many recent advances, including the Variational Autoencoder, which I would like to discuss in the near future.
### References
[1] Murphy, K. P. (2012). Machine Learning: A Probabilistic Perspective. The MIT Press.
[2] Chuklin, Aleksandr, Ilya Markov, and Maarten de Rijke. "Click models for web search." Synthesis Lectures on Information Concepts, Retrieval, and Services 7, no. 3 (2015): 1-115.
[3] Dan Piponi, "Expectation-Maximization with Less Arbitrariness", blog post at http://blog.sigfpe.com/2016/10/expectation-maximization-with-less.html.
|
{}
|
# Installing Blender 2.78.1 - Cycles Disney Brdf
Recently I learned that there are several initiatives to make Physically-Based Shading available on Blender. One of these is the development of a Cycles Disney BRDF.
I have a couple of questions on installing this Blender development on my computer.
1. Is this Blender branch Windows-only? I would like to install it on macOS.
2. As I have a Windows machine too, how would I install such a branch on my Windows machine?
Edit: after reading the answer of @aliasguru, I understand that GraphicAll makes new developments available without having to compile code?
If I push that download button, what am I downloading, and how do I install it as an experimental feature in my Blender 2.78 installation? Is this possible for macOS?
# Tech Stuff first
Like most Blender branches, they are cross-platform and can be compiled for any supported operating system. Since support for compiling Blender is out of scope for BSE, I won't go into detail on how to do it. However, the documentation gives a very good starting point: https://wiki.blender.org/index.php/Dev:Doc/Building_Blender Note that for adding the Disney Shader, you'll need to checkout a branch with GIT that actually contains the code. One such branch is the Experimental one.
# Easy solution for quick evaluation on Windows
Some people actually put in the extra effort and provide self-made Blender builds on a dedicated website called GraphicAll. You can search for keywords there; Disney will give you this page as a result: http://graphicall.org/1192
Just download and unzip it to your hard drive (this is a Windows Build), and run blender.exe from that folder directly. You'll find the Disney Shader in the Add section of the node editor, together with the Diffuse, Glossy, Anisotropic,...
• Just realised the second part of my answer is pretty redundant, apart from the info that you don't have to install anything. Unzip and run works fine. MacOS builds I haven't tried yet, but if I can get hold of a machine I'd like to give it a shot. – aliasguru Dec 21 '16 at 14:55
• you say "need to checkout a branch with GIT that actually contains the code. One such branch is the Experimental one." I am not a programmer, am not familiar with Blender development, and I cannot find documents that make things understandable for me. So, if you don't mind: what is GIT? What is a branch? Where can I find Experimental? – Old Man Dec 21 '16 at 15:28
• About GraphicAll ... is this not "official Blender development" ? You name it self made Blender Builds ... I thought that this Cycles Disney Brdf is part of the Blender software ? – Old Man Dec 21 '16 at 15:33
• @OldMan You're mixing up two things: a) the source code, and b) compiled binaries of the source code, i.e. applications that you can run. GIT is a version control system which helps programmers to keep track of changes in code. A branch is something like a spin-off version of a code. 'Experimental' is a name of a Blender Source Code branch. If you download a stable release, you are indeed running the branch called 'master'. 'Experimental' is a branch which has been changed, in this case one of the changes is the implementation of the Disney shader. – aliasguru Dec 21 '16 at 15:54
• @OldMan Unless you have at least some understanding of coding and/or using version control, you'll find it really hard to get a first self-compiled Blender up and running. But again, the docs I linked are a step by step guideline which helps you through. Regarding braches, there is lots of them. The purpose is to enable developers to test things without doing any harm on the master branch. – aliasguru Dec 21 '16 at 16:04
|
{}
|
# Meaning of this two-variable asymptotic notation
What does $$k \ln(k) = \Theta(n)$$ mean? Does it mean that $$k$$ is a function of $$n$$ and we actually had better write $$k(n)\ln(k(n)) = \Theta(n)$$?
• If $k$ and $n$ are independent, then we have a constant that belongs to $\Theta$, which, of course, is true, but is it what you want? Otherwise, with $k(n)$ or $n(k)$, everything depends on these functions. Aug 1 at 11:55
• @zkutch It is part of an exercise of CLRS so I was wondering what it means. Exercise 3.2.8. It looks like it should be a function rather than a constant.
Aug 1 at 11:58
• Sorry, I am unable to find "3.2.8" - is it correct/exact numbering? Aug 1 at 12:03
• It is 3.2-8. Sorry. On page 60.
Aug 1 at 12:04
• It is the last exercise before the problems.
Aug 1 at 12:05
It roughly means "for any two constants $$0 < c < C$$, the following holds whenever $$k,n \in \mathbb{N}$$ and $$cn \leq k \ln k \leq Cn$$".
For example, consider the claim "if $$k = \Theta(n)$$ then $$(1+1/n)^k = \Theta(1)$$". This means "for any two constants $$0 < c < C$$ there exist constants $$0 < m < M$$ such that following holds whenever $$k,n \in \mathbb{N}$$ and $$cn \leq k \leq Cn$$: $$m \leq (1+1/n)^k \leq M$$".
The assumptions on the domain of $$k,n$$ are usually clear from context, but do create an ambiguity, which is why the interpretation above is only rough.
Problem 3.2-8 from CLRS is as follows:
Show that $$k\ln k = \Theta(n)$$ implies $$k = \Theta\left(\frac{n}{\ln n}\right)$$.
We can expand this as follows:
For any two constants $$0 < c < C$$ there exist constants $$0 < m < M$$ such that the following holds: if $$k,n \in \mathbb{N}$$ satisfy $$cn \leq k\ln k \leq Cn$$ then $$m\frac{n}{\ln n} \leq k \leq M\frac{n}{\ln n}$$.
However, this is certainly not the intended interpretation, since $$\ln 1 = 0$$, and so we get a division by zero! Instead, I propose the following interpretation:
For any two constants $$0 < c < C$$ there exist constants $$0 < m < M$$ and $$N>0$$ such that the following holds: if $$k,n \geq N$$ satisfy $$cn \leq k\ln k \leq Cn$$ then $$m\frac{n}{\ln n} \leq k \leq M\frac{n}{\ln n}$$.
This is in the spirit of the usual definition of big O.
The idea of the proof is that since $$k\ln k = \Theta(n)$$, we have $$\ln k = \Theta(\ln n)$$. Showing this requires some arithmetic.
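As a quick numerical sanity check of the claim (a sketch, not a proof), we can solve $k\ln k = n$ for several values of $n$ and verify that the ratio $k/(n/\ln n)$ stays bounded between positive constants:

```python
import math

def k_for(n, iters=60):
    """Solve k * ln(k) = n for k by the fixed-point iteration k <- n / ln(k)."""
    k = max(float(n), 3.0)
    for _ in range(iters):
        k = n / math.log(k)
    return k

# If k = Theta(n / ln n), the ratio below must stay bounded away from 0 and infinity.
for n in [10**3, 10**6, 10**9, 10**12]:
    k = k_for(n)
    print(n, round(k / (n / math.log(n)), 3))
```

The ratio slowly approaches 1 from above, since $\ln k = \ln n - \ln\ln n + o(1)$; that is the arithmetic the proof makes precise.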
• So I tried to introduce some $m$ and $M$ for problem 3.2-8 but I failed. Can you give me some hint on that?
• The problem says: Show that $k\ln k = \Theta(n)$ implies $k = \Theta\left(\frac{n}{\ln n}\right)$
|
{}
|
# Preserving global structure¶
CPU times: user 220 ms, sys: 12 ms, total: 232 ms
Wall time: 230 ms
Data set contains 44808 samples with 50 features
To avoid constantly specifying colors in our plots, define a helper here.
## Easy improvements¶
Standard t-SNE, as implemented in most software packages, can be improved in several very easy ways that require virtually no effort in openTSNE, but can drastically improve the quality of the embedding.
### Standard t-SNE¶
First, we’ll run t-SNE as it is implemented in most software packages. This will serve as a baseline comparison.
CPU times: user 6min 4s, sys: 15.7 s, total: 6min 20s
Wall time: 53.4 s
### Using PCA initialization¶
The first, easy improvement we can get is to “inject” some global structure into the initialization. The initialization dictates the regions in which points will appear, so adding any global structure to the initialization can help.
Note that this is the default in this implementation and the parameter can be omitted.
CPU times: user 6min 7s, sys: 15.1 s, total: 6min 22s
Wall time: 53.5 s
### Using cosine distance¶
Typically, t-SNE is used to create an embedding of high dimensional data sets. However, the notion of Euclidean distance breaks down in high dimensions and the cosine distance is far more appropriate.
We can easily use the cosine distance by setting the metric parameter.
CPU times: user 6min 16s, sys: 14.5 s, total: 6min 31s
Wall time: 54 s
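For reference, the cosine distance between two vectors is one minus the cosine of the angle between them; it depends only on direction, not on magnitude, which is part of why it behaves better than Euclidean distance on high-dimensional data. A minimal sketch:

```python
import numpy as np

def cosine_distance(x, y):
    """Cosine distance: 1 - cos(angle between x and y)."""
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

x = np.array([1.0, 2.0, 3.0])
print(cosine_distance(x, 2 * x))   # ~0: rescaling a vector does not change it
print(cosine_distance(x, -x))      # ~2: opposite directions are maximally far
```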
### Using PCA initialization and cosine distance¶
Lastly, let’s see how our embedding looks with both the changes.
CPU times: user 6min 2s, sys: 13.6 s, total: 6min 16s
Wall time: 51.6 s
### Summary¶
We can see that we’ve made a lot of progress already. We would like points of the same color to appear close to one another.
This is not the case in standard t-SNE and t-SNE with cosine distance, because the green points appear on both the bottom and top of the embedding and the dark blue points appear on both the left and right sides.
This is improved when using PCA initialization and better still when we use both PCA initialization and cosine distance.
## Using perplexity¶
Perplexity can be thought of as the trade-off parameter between preserving local and global structure. Lower values will emphasise local structure, while larger values will do a better job at preserving global structure.
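Concretely, perplexity is 2 raised to the Shannon entropy of the conditional distribution over neighbors, so it is often read as an effective number of neighbors: a uniform distribution over k neighbors has perplexity exactly k. A small sketch:

```python
import numpy as np

def perplexity(p, eps=1e-12):
    """Perplexity of a discrete distribution: 2 ** (Shannon entropy in bits)."""
    p = np.asarray(p, dtype=float)
    entropy = -np.sum(p * np.log2(p + eps))
    return 2.0 ** entropy

print(perplexity(np.ones(30) / 30))           # ~30: uniform over 30 neighbors
print(perplexity([0.97, 0.01, 0.01, 0.01]))   # close to 1: one dominant neighbor
```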
### Perplexity: 500¶
CPU times: user 28min 32s, sys: 12.9 s, total: 28min 45s
Wall time: 3min 41s
## Using different affinity models¶
We can take advantage of the observation above, and use combinations of perplexities to obtain better embeddings.
In this section, we describe how to use the tricks described by Kobak and Berens in “The art of using t-SNE for single-cell transcriptomics”. While the publication focuses on t-SNE applications to single-cell data, the methods shown here are applicable to any data set.
When dealing with large data sets, methods which compute large perplexities may be very slow. Please see the large_data_sets notebook for an example of how to obtain a good embedding for large data sets.
### Perplexity annealing¶
The first trick we can use is to first optimize the embedding using a large perplexity to capture the global structure, then lower the perplexity to something smaller to emphasize the local structure.
CPU times: user 28min 39s, sys: 13.6 s, total: 28min 53s
Wall time: 3min 43s
CPU times: user 10.3 s, sys: 644 ms, total: 10.9 s
Wall time: 2.01 s
CPU times: user 2min 6s, sys: 4.6 s, total: 2min 11s
Wall time: 16.4 s
### Multiscale¶
One problem when using a high perplexity value, e.g. 500, is that some of the clusters start to mix with each other, making the separation less apparent. Instead of a typical Gaussian kernel, we can use a multiscale kernel which accounts for two different perplexity values. This typically results in better separation of clusters while still keeping much of the global structure.
CPU times: user 8min 28s, sys: 6.88 s, total: 8min 34s
Wall time: 1min 19s
CPU times: user 1.98 s, sys: 140 ms, total: 2.12 s
Wall time: 115 ms
Now we optimize just as we would with standard t-SNE.
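A toy illustration of the multiscale idea: build one row-normalized Gaussian affinity matrix per scale and average them. The bandwidths below are hand-picked for clarity; openTSNE's actual multiscale affinities calibrate a bandwidth per point to match the requested perplexities.

```python
import numpy as np

def gaussian_affinities(D2, sigma):
    """Row-normalized Gaussian affinities from a matrix of squared distances."""
    P = np.exp(-D2 / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)                    # a point is not its own neighbor
    return P / P.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)

# Multiscale kernel: average a "local" (small sigma) and a "global" (large sigma) kernel.
P_multi = 0.5 * (gaussian_affinities(D2, 0.5) + gaussian_affinities(D2, 5.0))
```

Each row of P_multi still sums to 1, but it now mixes nearest-neighbor information with a broader view of the data.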
## Comparison to UMAP¶
/home/ppolicar/local/miniconda3/envs/tsne/lib/python3.7/site-packages/umap/nndescent.py:92: NumbaPerformanceWarning:
The keyword argument 'parallel=True' was specified but no transformation for parallel execution was possible.
To find out why, try turning on parallel diagnostics, see http://numba.pydata.org/numba-doc/latest/user/parallel.html#diagnostics for help.
File "../../../local/miniconda3/envs/tsne/lib/python3.7/site-packages/umap/utils.py", line 409:
@numba.njit(parallel=True)
def build_candidates(current_graph, n_vertices, n_neighbors, max_candidates, rng_state):
^
current_graph, n_vertices, n_neighbors, max_candidates, rng_state
/home/ppolicar/local/miniconda3/envs/tsne/lib/python3.7/site-packages/numba/typed_passes.py:293: NumbaPerformanceWarning:
The keyword argument 'parallel=True' was specified but no transformation for parallel execution was possible.
To find out why, try turning on parallel diagnostics, see http://numba.pydata.org/numba-doc/latest/user/parallel.html#diagnostics for help.
File "../../../local/miniconda3/envs/tsne/lib/python3.7/site-packages/umap/nndescent.py", line 47:
@numba.njit(parallel=True)
def nn_descent(
^
state.func_ir.loc))
CPU times: user 22min 41s, sys: 49.1 s, total: 23min 30s
Wall time: 11min 37s
|
{}
|
# Locate user of an Android device
I am new to the object-oriented paradigm, and I am working on an Android Java project for an internship.
I must be able to locate the user and some surrounding buildings. I read about how to set up a LocationListener and so on, and decided that I had better write a class that manages everything for me.
public class NetCampusLocation {
private Activity activity;
private LocationManager locationManager;
private LocationListener locationListener;
private Location freshLocation;
public NetCampusLocation(Activity activity) {
this.activity = activity;
this.locationManager = (LocationManager) this.activity.getSystemService(Context.LOCATION_SERVICE);
this.locationListener = new LocationListener() {
@Override
public void onLocationChanged(Location location) {
}
@Override
public void onStatusChanged(String provider, int status, Bundle extras) {
}
@Override
public void onProviderEnabled(String provider) {
}
@Override
public void onProviderDisabled(String provider) {
}
};
this.freshLocation = this.locationManager.getLastKnownLocation(LocationManager.GPS_PROVIDER);
if (this.freshLocation == null) {
this.freshLocation = this.locationManager.getLastKnownLocation(LocationManager.NETWORK_PROVIDER);
}
}
public void setLocationUpdate(String provider, int minTimeInterval, int minDistance, LocationListener listener) {
}
public void stopLocationUpdate() {
}
}
This is the first time I am writing a class on my own (without any pedagogic goal), and I have never seen an example with an interface nested in a class, so I am wondering whether this is even logical or totally absurd.
I am also wondering whether it is good practice to pass my activity to the constructor of the class.
All kinds of advice would be very welcome.
-
I changed your title; I hope it expresses what the code does (that's what we want in the title). It's Java and not JAVA. – Marc-Andre Jul 16 at 13:48
I am new to the Code Review community and my question was more about "Am I doing something horrendous regarding object-oriented programming or not", but your title seems more appropriate. – Swann Polydor Jul 16 at 13:50
Well, I will redo my comment: Welcome to Code Review :D! Your question seems good, but your title was not quite what your question deserves. I've changed it to represent what your code does. You can add what you want to be reviewed inside your text. I hope you will get good reviews! (I'm sorry if I sounded rude or anything; that was not the goal at all!) – Marc-Andre Jul 16 at 13:52
One thing I'd suggest, which most (all?) of the Google/Android tutorials have, is class variables start with m, so mLocationManager, mLocationListener etc. I may be wrong on this, but I believe this to denote that the variable belongs to the class, so you can quickly type m and get a list of all the variables in your class. – Tom Hart Jul 16 at 15:20
It is good practice to mark as many fields as possible as final.
private final Activity activity;
private final LocationManager locationManager;
private final LocationListener locationListener;
I assume that freshLocation will be changed in your locationListener so that one should not be final.
All fields that only get initialized once can be marked final.
You seem to only be using your activity inside the constructor, so you don't need to keep that as a field at all.
Also, instead of passing an Activity, it's enough to pass a Context (Activity extends Context so you can still pass an activity). The only method you use on the activity is getSystemService which is part of the Context class. Contexts are often required to pass to various methods in Android, so that is perfectly fine. It is better to pass a Context than an Activity.
this.locationListener = new LocationListener() {
@Override
public void onLocationChanged(Location location) {
It is totally fine to do it this way. This is an anonymous inner class. The alternative is to use an inner class (non-anonymous), this would reduce code from your constructor but that code would be added in other parts of the class instead so which way you go doesn't matter much. This is fine.
-
I won't change locationManager or locationListener directly, but I will call methods that modify them, like this.locationManager.removeUpdates(this.locationListener); for example. Is this still fine with the final fields? Also, to pass a context, do I need to call the constructor using getApplicationContext()? – Swann Polydor Jul 16 at 14:17
@SwannPolydor Yes you can still use this.locationManager.removeUpdates(this.locationListener); even if locationManager and/or locationListener are final fields. – Simon André Forsberg Jul 16 at 14:21
@SwannPolydor You don't need to change anything when passing the context, because an activity is a context you can still pass the same activity as you did before. – Simon André Forsberg Jul 16 at 14:22
I understand the context part now. But I was not asking about the fact that the field is private, but that it is "final". I am not really aware of all these things, but I will check it out on my own. – Swann Polydor Jul 16 at 14:23
@SwannPolydor Sorry, it was a typo. I meant final of course. – Simon André Forsberg Jul 16 at 14:25
This piece of code clutter almost all your method space :
this.locationListener = new LocationListener() {
@Override
public void onLocationChanged(Location location) {
}
@Override
public void onStatusChanged(String provider, int status, Bundle extras) {
}
@Override
public void onProviderEnabled(String provider) {
}
@Override
public void onProviderDisabled(String provider) {
}
};
If you really need an empty implementation of the LocationListener, you can simply omit the whitespace in the methods, like so:
this.locationListener = new LocationListener() {
@Override
public void onLocationChanged(Location location) {}
@Override
public void onStatusChanged(String provider, int status, Bundle extras) {}
@Override
public void onProviderEnabled(String provider) {}
@Override
public void onProviderDisabled(String provider) {}
};
This will take less space in your method and we still understand that it's an empty implementation. I find it weird that there is no default implementation that you could use, but this is not that much of a problem.
-
• The code was more about "is it right to do something like that" (i.e., the skeleton) than about the code's content itself; obviously nothing is really functional in this state, so I will fill in some of the LocationListener interface's methods. – Swann Polydor Jul 16 at 14:15
|
{}
|
ABSTRACTS OF RECENT PAPERS - from 1998
ERICH W. ELLERS
1. Hermitian presentations of Chevalley groups II.
A Chevalley group is called Hermitian if its root system is 3-graded. In this case, the roots of degree 0 are called compact and the remaining ones (those of degree 1 or -1) are called noncompact. Here, we modify the classical Steinberg presentation of a Chevalley group in a way that its generators become the symbols indexed by noncompact roots (the noncompact symbols); the new presentation is referred to as Hermitian. Either a compact symbol (that is, a symbol indexed by a compact root) or the commutator of a compact symbol and a noncompact one can be expressed as a product of noncompact symbols by Chevalley's commutator formula. Combining these expressions, one obtains a formula that displays a pair of concatenated commutators and involves noncompact symbols only. We get a presentation of the same Chevalley group when we replace Chevalley's commutator formula by this new double commutator formula and restrict the other relations to noncompact symbols. The simply-laced case has already been treated in our paper Hermitian presentations of Chevalley groups I, J. Algebra 276 (2004) 371--382. Here we proceed with an intrinsic investigation of the general case. In the process, we give a detailed analysis of the structure constants as well as higher order constants of the Chevalley algebra associated with our Chevalley group. In particular, we see that there are fewer choices for the signs of the coefficients appearing in the double commutator formula than there are in Chevalley's commutator formula; actually, we show that the former are in one-to-one correspondence with the choice of signs produced by the noncompact vectors in a Chevalley basis of the above Chevalley algebra when seen as a basis for the Lie triple system they span. In the end we give examples of Hermitian presentations for the types B_n and C_n and a review of basic properties of 3-graded root systems.
2. The Coxeter Legacy---Reflections and Projections. (Editor, together with Chandler Davis). American Mathematical Society and Fields Institute for Research in Mathematical Sciences; Providence, RI; Toronto, Ontario; March 2006
Donald Coxeter infused enthusiasm, even passion, for mathematics in people of any age, any background, any profession, any walk of life. Enchanted by Euclidean geometry, he was interested in the beauty, the description, and the exploration of the world around us. His involvement in art and with artists earned him admiration and friends in the intellectual community all over the globe. Coxeter's devotion to polytopes and his interest in the theory of configurations live on in his students and followers. Coxeter groups arise in various subjects in applied mathematics, and they have a permanent place in some of the most demanding and fascinating branches of abstract mathematics, such as Lie algebras, algebraic groups, Chevalley groups, and Kac-Moody groups. This collection of articles by outstanding researchers and expositors is intended to capture the essence of the Coxeter Legacy. It is a mixture of surveys, up-to-date information, history, storytelling, and personal memories; and it includes a rich variety of beautiful illustrations.
3. Siegel transformations for even characteristic. (with O. Villa) Linear Algebra Appl. 395 (2005) 163--174
Let V be a vector space over a field K of even characteristic containing more than 2 elements. Suppose K is perfect and pi is an element in the special orthogonal group SO(V) with path dimension B(pi) equal to 2d. Then pi is a product of d-1 Siegel transformations and one transformation kappa in SO(V) with path dimension B(kappa) equal to 2. The length of pi with respect to the Siegel transformations is d if pi is unipotent or if the dimension of the quotient group B(pi) over its radical is greater than or equal to 4; in all other cases it is d+1.
4. Hermitian presentations of Chevalley groups I.
We give a presentation for a Chevalley group arising from a Hermitian Lie algebra whose roots have all the same length. This is a variant of Steinberg's presentation of a general Chevalley group, using only noncompact roots.
5. Conjugacy classes of involutions in the Lorentz group Omega(V) and in SO(V).
The Lorentz group Omega(V) is bireflectional and all involutions in Omega(V) are conjugate. More generally, we give conditions for two involutions to be conjugate in SO(V), provided that V is a vector space over a finite field or over an ordered field.
6. The special orthogonal group is trireflectional.
Let K be a field of even characteristic, V a finite-dimensional vector space over K, and SO(V) the special orthogonal group. Then SO(V) is trireflectional, provided dim(V)>2 and SO(V) is distinct from $O^{+}(4,2)$.
7. Intersection of conjugacy classes with Bruhat cells in Chevalley groups.
Let G be a simple and simply-connected algebraic group that is defined and quasi-split over a field K. We investigate properties of intersections of Bruhat cells of G with conjugacy classes C of G, in particular, we consider the question, when is such an intersection not empty.
8. Coxetergruppen - ein Beispiel. (Coxeter groups - an example.)
9. Products of involutions in the finite Chevalley groups of type $F_4(K)$.
Let K be a finite field of odd characteristic and let G be a Chevalley group of type $F_4(K)$. We find sufficient conditions for an element in G to be a product of two or three involutions.
10. Products of transvections in one conjugacy class of the symplectic group over the p-adic numbers.
Every element in the symplectic group over the field of p-adic numbers (p>3) is a product of transvections in a single conjugacy class. We determine the minimal number of factors needed in any such product for transformations with path dimensions 1, 2, and 3. For indecomposable symplectic transformations with path dimensions 4, 5, and 6 we find upper bounds for the minimal number of factors. Results of Knüppel can now be applied to obtain similar upper bounds for transformations with higher path dimensions.
11. Products of involutions in simple Chevalley groups
Let G be a Chevalley group defined over a field K. If K contains enough elements, then every element in G is a product of five or fewer involutions. The subgroup N of G is generated by involutions provided G is not of type $C_r$ or $B_2$.
12. Gauss decomposition with prescribed semisimple part: short proof
We give a uniform short proof of the fact that the intersection of every noncentral conjugacy class in a Chevalley group and a big Gauss cell is nonempty and that this intersection contains elements with any prescribed semisimple part. The interest in this property stems at least in part from its relation to Ore's and Thompson's conjectures in the theory of finite groups.
13. Bireflectionality of orthogonal and symplectic groups of characteristic 2
Let V be a finite dimensional vector space over a field K of characteristic 2. Let O(V) be the orthogonal group defined by a nondegenerate quadratic form. Then every element in O(V) is a product of two elements of order 2, unless all nonsingular subspaces of V are at most 2-dimensional. If V is a nonsingular symplectic space, then every element in the symplectic group Sp(V) is a product of two elements of order 2, except if dim V=2 and |K|=2.
14. Intersection of conjugacy classes of Chevalley groups with Gauss cells
Let G be a proper Chevalley group or a finite twisted Chevalley group. We give some description of the intersections of noncentral conjugacy classes of G with certain Gauss cells, which we call Coxeter cells. This generalizes a previous result of the authors. It is also the basis of yet another generalization of the same result involving weight functions on conjugacy classes.
15. Covering numbers for Chevalley groups
Let G be a quasisimple Chevalley group. We give an upper bound for the covering number cn(G) which is linear in the rank of G, i.e., we give a constant d such that for every noncentral conjugacy class C of G we have $C^{rd}=G$, where $r = \operatorname{rank} G$.
16. A generalization of Sourour's theorem
Let X be an invertible n by n matrix, n > 1, with entries in some field K. Assume X is not equal to diag(a,...,a) for any a in K. Then for every sequence $(a_1,\dots,a_{n-1})$, where $a_i \in K$, there is a matrix Y with det Y = 1 such that the n-1 principal minors of $YXY^{-1}$ have the values $a_1,\dots,a_{n-1}$ respectively.
17. On the conjectures of J. Thompson and O. Ore
If G is a finite simple group of Lie type over a field containing more than 8 elements (for twisted groups ${}^lX_n(q^l)$ we require q > 8, except for ${}^2B_2(q^2)$, ${}^2G_2(q^2)$, and ${}^2F_4(q^2)$, where we assume $q^2 > 8$), then G is the square of some conjugacy class and consequently every element in G is a commutator.
18. Groups satisfying Scherk's length theorem
We consider subgroups G of the general linear group GL(n,K), where char K is distinct from 2. If G is generated by the set S of its simple involutions, if $-1_V$ is an element of G, and if Scherk's length theorem holds for G, then G is a subgroup of an orthogonal group.
Updated Jan 2006
|
{}
|
# NA in Indirect Effects for 95% likelihood based CI’s metaSEM
NA in Indirect Effects for 95% likelihood based CI’s metaSEM
Dear Mike and Others,
I am trying to estimate a random-effects TSSEM for my dissertation. I have read your book and related papers, and I am following the wonderful resources provided by you and your team. My goal is to perform some moderator analyses using categorical variables after I successfully run the TSSEM model.
I am attaching my R script and the structural model image. With these data and this model, I found two issues and have two clarifications.
1. Some of the 95% likelihood-based CIs are shown as "NA". This happens mainly for the indirect effects, for example in my main tssem2 model (the first one in my R code). The issue is more pronounced when I run moderator analyses and estimate two tssem2 models (split on the categorical moderator): there, the lbound and ubound values of even the direct effects show as "NA". Can you please let me know whether I have set up anything wrong in my model specification or data? Please also let me know if and how I should use starting values from the prior estimation.
2. I get the following warning message when I run some of the tssem2 models, for example in my first moderator analysis with the variable "tc".
Warning message:
In .solve(x = object$mx.fit@output$calculatedHessian, parameters = my.name) :
Error in solving the Hessian matrix. Generalized inverse is used. The standard errors may not be trustworthy.
I assume I can ignore this warning, given that I am primarily using 95% likelihood-based CIs, provided R can estimate these CIs for all my parameters.
3. I tried not using the intervals="LB" option to see if I would at least get standard errors. Though I was successful in getting the standard errors and CIs for the direct effects, I could not get them for the indirect effects. Moreover, because of the following warning message, I was not sure whether I could report them for review at a top journal.
Warning message:
In vcov.wls(object, R = R) :
Parametric bootstrap with 50 replications was used to approximate the sampling covariance matrix of the parameter estimates. A better approach is to use likelihood-based confidence interval by including the intervals.type="LB" argument in the analysis.
My question is: can I report these standard errors, or can I increase the replications? How do I obtain the standard errors for the indirect effects when the intervals="LB" option is not specified?
4. This is a clarification regarding my setup of the S matrix. In my model (please see the attached figure), since I am not explicitly modeling the link from T to J or vice versa, I wanted to correlate them. Can you please verify that my S matrix makes sense in this regard? Is it okay if I do not correlate them?
Regards,
Srikanth Parameswaran
Joined: 10/08/2009 - 22:37
Dear Srikanth,
For (1), there are two methods. The first is to rerun your model with rerun(). A better method is to use diag.constraints=FALSE, which is more stable because no constraints are involved.
For (2) and (3), the LBCIs can be found even though the Hessian matrix may not be positive definite. I am not sure whether the LBCIs are still trustworthy. The OpenMx team should know much more than I do; they may provide some insight on this topic.
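A sketch of the two methods in code (the object names are placeholders: random1 stands for the poster's fitted tssem1 object, and A1/S1 for the RAM matrices from the attached script; requires the metaSEM package):

```r
library(metaSEM)

## Method 1: keep the diagonal constraints and retry estimation from new
## starting values; rerun() wraps OpenMx's mxTryHard().
fit1 <- tssem2(random1, Amatrix = A1, Smatrix = S1,
               diag.constraints = TRUE, intervals.type = "LB")
fit1 <- rerun(fit1)

## Method 2 (more stable): drop the diagonal constraints entirely.
fit2 <- tssem2(random1, Amatrix = A1, Smatrix = S1,
               diag.constraints = FALSE, intervals.type = "LB")
summary(fit2)
```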
For (4), it looks fine. You may check the attached figures.
Mike
Joined: 12/28/2016 - 15:36
Thank you so much Prof. Cheung for your answers. It helps a lot. Really appreciate your time and consideration.
I have followed your suggestions and estimated a similar model (figure and R code attached). However, the main full-sample model gave "NA" in the 95% likelihood CIs with both the rerun() and the diag.constraints=FALSE methods.
In my moderator analysis, the structural model for studies with the categorical moderator tc=0 works fine, but the "NA" issue is more pronounced for the model with tc=1. I have tried four different strategies.
In strategy 1, I used diag.constraints=TRUE, intervals="LB". I get NAs even after rerun().
In strategy 2, I used diag.constraints=FALSE, intervals="LB". I get NAs here as well, even after rerun().
In strategy 3, I used just diag.constraints=FALSE. This works great, but I am not able to get CIs for the indirect effects.
In strategy 4, I used just diag.constraints=TRUE. This also works, but I am not able to get CIs for the indirect effects. Moreover, I get the following warning.
In vcov.wls(object, R = R) :
Parametric bootstrap with 50 replications was used to approximate the sampling covariance matrix of the parameter estimates. A better approach is to use likelihood-based confidence interval by including the intervals.type="LB" argument in the analysis.
Based on my analyses I have the following questions.
1) In your book, you suggest using diag.constraints=TRUE for mediation models. Given that diag.constraints=TRUE gives me "NA", is it fine to use FALSE (strategy 3), as you suggest in your response above, even though mine is a mediation model? Can I report the standard errors?
2) How do I obtain the indirect effects while following strategy 3?
3) If I follow the recommendation in your book, I would have to use strategy 4, but can I ignore the warning about the bootstrap? Even with this approach I am not able to obtain the indirect effect.
4) Is there a way to increase the replications beyond 50?
Sorry for bugging you with my detailed questions. I am desperate to use advanced methods in my papers. Thanks in advance for your time and consideration.
Regards,
Srikanth Parameswaran
Joined: 10/08/2009 - 22:37
The LBCI works fine on my machine (see the attached file).
Strategy 1 was recommended in my book because, at that time, diagonal constraints were required for mediation models in the wls() function. After publishing the book, I managed to rewrite the wls() function to handle this issue, so we may now obtain SEs for all models with diag.constraints=FALSE. This is also the recommended method now.
If you use diag.constraints=TRUE with intervals="z", OpenMx does not report the SEs, so wls() uses a parametric bootstrap to obtain approximate SEs. You may increase the number of bootstrap replications by calling summary(random2, R=100). There is also a new mxSE() function in OpenMx; you may use it to obtain the SEs via the delta method. Please see the attached examples.
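The two options above can be sketched as follows (a sketch only: random2 is assumed to be a fitted tssem2 object, and "IndirectEffect" stands for the label of an mxAlgebra defined in the model — both names are illustrative):

```r
library(metaSEM)
library(OpenMx)

## 1. More parametric-bootstrap replications for the approximate SEs
##    computed by vcov.wls():
summary(random2, R = 100)

## 2. Delta-method SEs via OpenMx's mxSE(), applied to the fitted mxModel
##    stored inside the metaSEM object:
mxSE(IndirectEffect, random2$mx.fit)
```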
Mike
Joined: 12/28/2016 - 15:36
Thanks a lot, Mike, for your answer. Sure, I will see the examples and understand them in detail. However, I am not seeing the attachment in your post.
Joined: 10/08/2009 - 22:37
Oops! Here it is.
Joined: 12/28/2016 - 15:36
Thanks so much Mike!! Got the attachment now!! Appreciate your great help.
Joined: 12/28/2016 - 15:36
Hi Mike,
I tried your latest and recommended suggestion, i.e., using diag.constraints=FALSE, intervals="z". Everything works fine and looks great, and it makes the whole process a lot easier.
1) I tried the moderator analysis with the same method (R code attached). Again, everything works well. However, in the moderator analysis I am getting an error from the subgroup.summary function. This happens only when I use the diag.constraints=FALSE, intervals="z" option.
Error in if (pchisq(chi.squared, df = df, ncp = 0) >= upper) { :
missing value where TRUE/FALSE needed
Am I setting up the S3 or S3_high matrix wrongly? Or am I missing something else? How do I get subgroup.summary to work?
2) Also, in my moderator analysis 2, the model "stage1_fam_high.fit" gave OpenMx status 5. I could only get it running with the acov="unweighted" option and the rerun() command. After this I got OpenMx status 0, but the output says "Retry limit reached". So, are the stage-1 estimates usable in stage 2, even though I got OpenMx status 0?
In stage 2, though I could run both models, I got the following warning for "stage2_fam_high.fit".
Warning message:
In .solve(x = object$mx.fit@output$calculatedHessian, parameters = my.name) :
Error in solving the Hessian matrix. Generalized inverse is used. The standard errors may not be trustworthy.
You answered this for me earlier. However, can I ignore this warning, given that I am using diag.constraints=FALSE, intervals="z"?
Thanks so much for your tremendous support so far. Any help with my queries would be greatly appreciated.
Regards,
Srikanth Parameswaran
Joined: 10/08/2009 - 22:37
1. It seems that the subgroup.summary() function is not from the metaSEM package. Could you please check with the original author?
2. Suppose we would like to calculate the ACOV between r_ab and r_cd; we need r_ab, r_cd, r_ac, r_ad, r_bc, and r_bd to calculate it. If some of them are missing, the function may fail. Using either acov="unweighted" or acov="weighted" is more robust because the average correlation matrix is used to calculate the ACOV. There are other issues in your "stage1_fam_high.fit", however. As you can see below, only 1 or 2 studies contribute to some of these correlations, so it is expected that the random-effects model will fail. Given this problem, the warning message is a fatal one indicating a problem in your data. It is always a good idea to check the data before running any statistical analyses.
> pattern.na(my.df6_fam_high, show.na=FALSE)
S P J T E
S 32 21 18 5 10
P 21 32 11 2 8
J 18 11 32 2 5
T 5 2 2 32 1
E 10 8 5 1 32
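The data check above can be sketched like this (my.df6_fam_high is the poster's list of correlation matrices; the cutoff of 3 studies is an illustrative choice, not a metaSEM rule):

```r
library(metaSEM)

## Count how many studies report each correlation coefficient.
counts <- pattern.na(my.df6_fam_high, show.na = FALSE)
counts

## Flag cells estimated from very few studies; heterogeneity variances for
## these correlations are unlikely to be estimable under a random-effects model.
which(counts < 3 & upper.tri(counts), arr.ind = TRUE)
```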
Joined: 12/28/2016 - 15:36
Sure, I will look into these issues.
Joined: 03/21/2017 - 02:58
Rerunning tssem2 to avoid NA in bounds
Hi Mike and other users,
I have a similar issue where I am getting "NA" for the upper and/or lower bounds of the indirect effect and the direct effect. Mike suggested rerunning tssem2, which solved the problem with one of my datasets:
random2 <- rerun(random2)
I would appreciate help with understanding what the program is doing that solves the problem.
I have other datasets I need to rerun analyses with. If I rerun once and the problem remains, does it make sense to rerun more times until it is solved? Is there a limit to how many times I should rerun the analyses?
Thanks so much,
Mei Yi
Joined: 10/08/2009 - 22:37
Hi Mei Yi,
rerun() in the metaSEM package is a wrapper of mxTryHard() in the OpenMx package. You may refer to its manual for the details.
I don't think there is a maximum number of times you can rerun. You may increase the number of attempts by including the extraTries argument, say,
random2 <- rerun(random2, extraTries=20)
Mike
Joined: 03/21/2017 - 02:58
Does rerun solve Hessian matrix error & asyCov npd problem?
Hi Mike,
Many thanks for your quick response. I ran tssem2 for the random-effects model and used diag.constraints=FALSE, as recommended in your comment above. Although the OpenMx status was 0 and all bounds were displayed (no NA), I got the following errors:
Error in wls(Cov = pooledS, asyCov = asyCov, n = tssem1.obj$total.n, Amatrix = Amatrix, :
"asyCov" is not positive definite.
In addition: Warning message:
In .solve(x = object$mx.fit@output$calculatedHessian, parameters = my.name) :
Error in solving the Hessian matrix. Generalized inverse is used. The standard errors may not be trustworthy.
I tried to rerun random2 (I did not need to use extraTries) and I did not see any more error/warning messages. I did see the following message:
Begin fit attempt 1 of at maximum 11 tries
Lowest minimum so far: 0.881166035181365
Solution found
Running final fit, for Hessian and/or standard errors and/or confidence intervals
Does this mean that the Hessian matrix error and asyCov npd problems are solved? Can I interpret the output from the rerun?
Thanks so much,
Mei Yi
Joined: 03/21/2017 - 02:58
Actually rerun did not work
Hi Mike,
Sorry, actually random2 did not run after I got the asyCov npd and Hessian matrix error messages, and rerun did not work because random2 was never created. I was actually running and rerunning the random2 from the previous analysis, which was fine, but I did not realize this at first. So the problem was definitely not solved!
tssem1 from this analysis had 5 out of 10 heterogeneity indices < 1e-10, so I fixed them to zero, reran tssem1 with a user-defined structure, then ran random2, and there were no more error/warning messages, but there were NA bounds for the indirect and direct effects. A simple rerun of random2 (no extraTries needed) gave me numbers for lbound and ubound. I guess I have solved my problem with your advice in this and other threads/emails. The only question that remains is why there are so many tiny heterogeneity variances; I think I get more of them when I have fewer studies/matrices in my dataset (I had 7 in this one, with 645 participants).
Thank you so much for your help!
Mei Yi
Joined: 03/21/2017 - 02:58
If only Hessian matrix error, rerun seems to fix problem
Hi Mike,
I have one more update: if I only get the Hessian matrix error (but no asyCov npd error), then a simple rerun of random2 (no extraTries) seems to solve the problem, and I think I can interpret the output. I've pasted the error message and "solution found" message below.
Warning message:
In .solve(x = object$mx.fit@output$calculatedHessian, parameters = my.name) :
Error in solving the Hessian matrix. Generalized inverse is used. The standard errors may not be trustworthy.
Begin fit attempt 1 of at maximum 11 tries
Lowest minimum so far: 3.85332944966138
Solution found
Running final fit, for Hessian and/or standard errors and/or confidence intervals
If I should not interpret the output or am otherwise misunderstanding something, I would appreciate your clarification.
Thank you!
Mei Yi
Joined: 10/08/2009 - 22:37
Hi Mei Yi,
Could you send me the code and data by email? I will take a look at it.
Mike
Search results
Search: MSC category 47 ( Operator theory )
Results 26 - 50 of 67
26. CJM 2008 (vol 60 pp. 1010)
Galé, José E.; Miana, Pedro J.
$H^\infty$ Functional Calculus and Mikhlin-Type Multiplier Conditions
Let $T$ be a sectorial operator. It is known that the existence of a bounded (suitably scaled) $H^\infty$ calculus for $T$, on every sector containing the positive half-line, is equivalent to the existence of a bounded functional calculus on the Besov algebra $\Lambda_{\infty,1}^\alpha(\R^+)$. Such an algebra includes functions defined by Mikhlin-type conditions, and so the Besov calculus can be seen as a result on multipliers for $T$. In this paper, we use fractional derivation to analyse in detail the relationship between $\Lambda_{\infty,1}^\alpha$ and Banach algebras of Mikhlin type. As a result, we obtain a new version of the quoted equivalence.
Keywords: functional calculus, fractional calculus, Mikhlin multipliers, analytic semigroups, unbounded operators, quasimultipliers
Categories: 47A60, 47D03, 46J15, 26A33, 47L60, 47B48, 43A22
27. CJM 2008 (vol 60 pp. 758)
Bercovici, H.; Foias, C.; Pearcy, C.
On the Hyperinvariant Subspace Problem. IV
This paper is a continuation of three recent articles concerning the structure of hyperinvariant subspace lattices of operators on a (separable, infinite dimensional) Hilbert space $\mathcal{H}$. We show herein, in particular, that there exists a "universal" fixed block-diagonal operator $B$ on $\mathcal{H}$ such that if $\varepsilon>0$ is given and $T$ is an arbitrary nonalgebraic operator on $\mathcal{H}$, then there exists a compact operator $K$ of norm less than $\varepsilon$ such that (i) $\Hlat(T)$ is isomorphic as a complete lattice to $\Hlat(B+K)$ and (ii) $B+K$ is a quasidiagonal, $C_{00}$, (BCP)-operator with spectrum and left essential spectrum the unit disc. In the last four sections of the paper, we investigate the possible structures of the hyperlattice of an arbitrary algebraic operator. Contrary to existing conjectures, $\Hlat(T)$ need not be generated by the ranges and kernels of the powers of $T$ in the nilpotent case. In fact, this lattice can be infinite.
Category: 47A15
28. CJM 2008 (vol 60 pp. 520)
Chen, Chang-Pao; Huang, Hao-Wei; Shen, Chun-Yen
Matrices Whose Norms Are Determined by Their Actions on Decreasing Sequences
Let $A=(a_{j,k})_{j,k \ge 1}$ be a non-negative matrix. In this paper, we characterize those $A$ for which $\|A\|_{E, F}$ are determined by their actions on decreasing sequences, where $E$ and $F$ are suitable normed Riesz spaces of sequences. In particular, our results apply to the following spaces: $\ell_p$, $d(w,p)$, and $\ell_p(w)$. The results established here generalize ones given by Bennett; Chen, Luor, and Ou; Jameson; and Jameson and Lashkaripour.
Keywords: norms of matrices, normed Riesz spaces, weighted mean matrices, Nörlund mean matrices, summability matrices, matrices with row decreasing
Categories: 15A60, 40G05, 47A30, 47B37, 46B42
29. CJM 2007 (vol 59 pp. 1207)
Bu, Shangquan; Le, Christian
$H^p$-Maximal Regularity and Operator Valued Multipliers on Hardy Spaces
We consider maximal regularity in the $H^p$ sense for the Cauchy problem $u'(t) + Au(t) = f(t)\ (t\in \R)$, where $A$ is a closed operator on a Banach space $X$ and $f$ is an $X$-valued function defined on $\R$. We prove that if $X$ is an AUMD Banach space, then $A$ satisfies $H^p$-maximal regularity if and only if $A$ is Rademacher sectorial of type $<\frac{\pi}{2}$. Moreover we find an operator $A$ with $H^p$-maximal regularity that does not have the classical $L^p$-maximal regularity. We prove a related Mikhlin type theorem for operator valued Fourier multipliers on Hardy spaces $H^p(\R;X)$, in the case when $X$ is an AUMD Banach space.
Keywords: $L^p$-maximal regularity, $H^p$-maximal regularity, Rademacher boundedness
Categories: 42B30, 47D06
30. CJM 2007 (vol 59 pp. 966)
Forrest, Brian E.; Runde, Volker; Spronk, Nico
Operator Amenability of the Fourier Algebra in the $\cb$-Multiplier Norm
Let $G$ be a locally compact group, and let $A_{\cb}(G)$ denote the closure of $A(G)$, the Fourier algebra of $G$, in the space of completely bounded multipliers of $A(G)$. If $G$ is a weakly amenable, discrete group such that $\cstar(G)$ is residually finite-dimensional, we show that $A_{\cb}(G)$ is operator amenable. In particular, $A_{\cb}(\free_2)$ is operator amenable even though $\free_2$, the free group on two generators, is not an amenable group. Moreover, we show that if $G$ is a discrete group such that $A_{\cb}(G)$ is operator amenable, a closed ideal of $A(G)$ is weakly completely complemented in $A(G)$ if and only if it has an approximate identity bounded in the $\cb$-multiplier norm.
Keywords: $\cb$-multiplier norm, Fourier algebra, operator amenability, weak amenability
Categories: 43A22, 43A30, 46H25, 46J10, 46J40, 46L07, 47L25
31. CJM 2007 (vol 59 pp. 614)
Labuschagne, C. C. A.
Preduals and Nuclear Operators Associated with Bounded, $p$-Convex, $p$-Concave and Positive $p$-Summing Operators
We use Krivine's form of the Grothendieck inequality to renorm the space of bounded linear maps acting between Banach lattices. We construct preduals and describe the nuclear operators associated with these preduals for this renormed space of bounded operators, as well as for the spaces of $p$-convex, $p$-concave and positive $p$-summing operators acting between Banach lattices and Banach spaces. The nuclear operators obtained are described in terms of factorizations through classical Banach spaces via positive operators.
Keywords: $p$-convex operator, $p$-concave operator, $p$-summing operator, Banach space, Banach lattice, nuclear operator, sequence space
Categories: 46B28, 47B10, 46B42, 46B45
32. CJM 2007 (vol 59 pp. 638)
MacDonald, Gordon W.
Distance from Idempotents to Nilpotents
We give bounds on the distance from a non-zero idempotent to the set of nilpotents in the set of $n\times n$ matrices in terms of the norm of the idempotent. We construct explicit idempotents and nilpotents which achieve these distances, and determine exact distances in some special cases.
Keywords: operator, matrix, nilpotent, idempotent, projection
Categories: 47A15, 47D03, 15A30
33. CJM 2007 (vol 59 pp. 393)
Servat, E.
Le splitting pour l'opérateur de Klein--Gordon: une approche heuristique et numérique
In this article we study the difference between the first two eigenvalues (the splitting) of a one-dimensional semiclassical Klein--Gordon operator, in the case of a symmetric double-well potential. In the case of a small potential barrier, B. Helffer and B. Parisse obtained results analogous to those known for the Schrödinger operator. In the case of a large potential barrier, we obtain estimates of the Fourier transforms of the eigenfunctions which lead to a conjecture on the splitting. Numerical computations support this conjecture.
Categories: 35P05, 34L16, 34E05, 47A10, 47A70
34. CJM 2006 (vol 58 pp. 859)
Nonstandard Ideals from Nonstandard Dual Pairs for $L^1(\omega)$ and $l^1(\omega)$
The Banach convolution algebras $l^1(\omega)$ and their continuous counterparts $L^1(\bR^+,\omega)$ are much studied, because (when the submultiplicative weight function $\omega$ is radical) they are pretty much the prototypic examples of commutative radical Banach algebras. In cases of "nice" weights $\omega$, the only closed ideals they have are the obvious, or "standard", ideals. But in the general case, a brilliant but very difficult paper of Marc Thomas shows that nonstandard ideals exist in $l^1(\omega)$. His proof was successfully exported to the continuous case $L^1(\bR^+,\omega)$ by Dales and McClure, but remained difficult. In this paper we first present a small improvement: a new and easier proof of the existence of nonstandard ideals in $l^1(\omega)$ and $L^1(\bR^+,\omega)$. The new proof is based on the idea of a "nonstandard dual pair", which we introduce. We are then able to make a much larger improvement: we find nonstandard ideals in $L^1(\bR^+,\omega)$ containing functions whose supports extend all the way down to zero in $\bR^+$, thereby solving what has become a notorious problem in the area.
Keywords: Banach algebra, radical, ideal, standard ideal, semigroup
Categories: 46J45, 46J20, 47A15
35. CJM 2006 (vol 58 pp. 548)
Hausdorff and Quasi-Hausdorff Matrices on Spaces of Analytic Functions
We consider Hausdorff and quasi-Hausdorff matrices as operators on classical spaces of analytic functions such as the Hardy and the Bergman spaces, the Dirichlet space, the Bloch spaces and $\BMOA$. When the generating sequence of the matrix is the moment sequence of a measure $\mu$, we find the conditions on $\mu$ which are equivalent to the boundedness of the matrix on the various spaces.
Categories: 47B38, 46E15, 40G05, 42A20
36.
Strictly Singular and Cosingular Multiplications
Let $L(X)$ be the space of bounded linear operators on the Banach space $X$. We study the strict singularity and cosingularity of the two-sided multiplication operators $S \mapsto ASB$ on $L(X)$, where $A,B \in L(X)$ are fixed bounded operators and $X$ is a classical Banach space. Let $1 …
Categories: 47B47, 46B28
37. CJM 2005 (vol 57 pp. 771)
Schrohe, E.; Seiler, J.
The Resolvent of Closed Extensions of Cone Differential Operators
We study closed extensions $\underline A$ of an elliptic differential operator $A$ on a manifold with conical singularities, acting as an unbounded operator on a weighted $L_p$-space. Under suitable conditions we show that the resolvent $(\lambda-\underline A)^{-1}$ exists in a sector of the complex plane and decays like $1/|\lambda|$ as $|\lambda|\to\infty$. Moreover, we determine the structure of the resolvent with enough precision to guarantee existence and boundedness of imaginary powers of $\underline A$. As an application we treat the Laplace--Beltrami operator for a metric with straight conical degeneracy and describe domains yielding maximal regularity for the Cauchy problem $\dot{u}-\Delta u=f$, $u(0)=0$.
Keywords: manifolds with conical singularities, resolvent, maximal regularity
Categories: 35J70, 47A10, 58J40
38. CJM 2005 (vol 57 pp. 506)
Gross, Leonard; Grothaus, Martin
Reverse Hypercontractivity for Subharmonic Functions
Contractivity and hypercontractivity properties of semigroups are now well understood when the generator, $A$, is a Dirichlet form operator. It has been shown that in some holomorphic function spaces the semigroup operators, $e^{-tA}$, can be bounded below from $L^p$ to $L^q$ when $p$, $q$ and $t$ are suitably related. We will show that such lower boundedness occurs also in spaces of subharmonic functions.
Keywords: reverse hypercontractivity, subharmonic
Categories: 58J35, 47D03, 47D07, 32Q99, 60J35
39. CJM 2005 (vol 57 pp. 225)
Booss-Bavnbek, Bernhelm; Lesch, Matthias; Phillips, John
Unbounded Fredholm Operators and Spectral Flow
We study the gap (= "projection norm" = "graph distance") topology of the space of all (not necessarily bounded) self-adjoint Fredholm operators in a separable Hilbert space by the Cayley transform and direct methods. In particular, we show the surprising result that this space is connected, in contrast to the bounded case. Moreover, we present a rigorous definition of the spectral flow of a path of such operators (actually alternative but mutually equivalent definitions) and prove its homotopy invariance. As an example, we discuss operator curves on manifolds with boundary.
Categories: 58J30, 47A53, 19K56, 58J32
40. CJM 2005 (vol 57 pp. 61)
Binding, Paul; Strauss, Vladimir
On Operators with Spectral Square but without Resolvent Points
Decompositions of spectral type are obtained for closed Hilbert space operators with empty resolvent set, but whose square has closure which is spectral. Krein space situations are also discussed.
Keywords: unbounded operators, closed operators, spectral resolution, indefinite metric
Categories: 47A05, 47A15, 47B40, 47B50, 46C20
41. CJM 2004 (vol 56 pp. 742)
Jiang, Chunlan
Similarity Classification of Cowen-Douglas Operators
Let $\cal H$ be a complex separable Hilbert space and ${\cal L}({\cal H})$ denote the collection of bounded linear operators on ${\cal H}$. An operator $A$ in ${\cal L}({\cal H})$ is said to be strongly irreducible if ${\cal A}^{\prime}(T)$, the commutant of $A$, has no non-trivial idempotent. An operator $A$ in ${\cal L}({\cal H})$ is said to be a Cowen-Douglas operator if there exist $\Omega$, a connected open subset of $C$, and $n$, a positive integer, such that (a) $\Omega\subset\sigma(A)=\{z\in C; A-z \text{ not invertible}\}$; (b) $\ran(A-z)={\cal H}$ for $z$ in $\Omega$; (c) $\bigvee_{z\in\Omega}\ker(A-z)={\cal H}$; and (d) $\dim \ker(A-z)=n$ for $z$ in $\Omega$. In the paper, we give a similarity classification of strongly irreducible Cowen-Douglas operators by using the $K_0$-group of the commutant algebra as an invariant.
Categories: 47A15, 47C15, 13E05, 13F05
42. CJM 2004 (vol 56 pp. 277)
Dostanić, Milutin R.
Spectral Properties of the Commutator of Bergman's Projection and the Operator of Multiplication by an Analytic Function
It is shown that the singular values of the operator $aP-Pa$, where $P$ is Bergman's projection over a bounded domain $\Omega$ and $a$ is a function analytic on $\bar{\Omega}$, detect the length of the boundary of $a(\Omega)$. We also point out the relation of that operator to the spectral asymptotics of a Hankel operator with an anti-analytic symbol.
Category: 47B10
43. CJM 2004 (vol 56 pp. 134)
Li, Chi-Kwong; Sourour, Ahmed Ramzi
Linear Operators on Matrix Algebras that Preserve the Numerical Range, Numerical Radius or the States
Every norm $\nu$ on $\mathbf{C}^n$ induces two norm numerical ranges on the algebra $M_n$ of all $n\times n$ complex matrices, the spatial numerical range
$$W(A)= \{x^*Ay : x, y \in \mathbf{C}^n, \nu^D(x) = \nu(y) = x^*y = 1\},$$
where $\nu^D$ is the norm dual to $\nu$, and the algebra numerical range
$$V(A) = \{ f(A) : f \in \mathcal{S} \},$$
where $\mathcal{S}$ is the set of states on the normed algebra $M_n$ under the operator norm induced by $\nu$. For a symmetric norm $\nu$, we identify all linear maps on $M_n$ that preserve either one of the two norm numerical ranges or the set of states or vector states. We also identify the numerical radius isometries, i.e., linear maps that preserve the (one) numerical radius induced by either numerical range. In particular, it is shown that if $\nu$ is not the $\ell_1$, $\ell_2$, or $\ell_\infty$ norm, then the linear maps that preserve either numerical range or either set of states are "inner", i.e., of the form $A\mapsto Q^*AQ$, where $Q$ is a product of a diagonal unitary matrix and a permutation matrix, and the numerical radius isometries are unimodular scalar multiples of such inner maps. For the $\ell_1$ and the $\ell_\infty$ norms, the results are quite different.
Keywords: numerical range, numerical radius, state, isometry
Categories: 15A60, 15A04, 47A12, 47A30
44. CJM 2003 (vol 55 pp. 1264)
Havin, Victor; Mashreghi, Javad
Admissible Majorants for Model Subspaces of $H^2$, Part II: Fast Winding of the Generating Inner Function
This paper is a continuation of \cite{HM02I}. We consider the model subspaces $K_\Theta=H^2\ominus\Theta H^2$ of the Hardy space $H^2$ generated by an inner function $\Theta$ in the upper half plane. Our main object is the class of admissible majorants for $K_\Theta$, denoted by $\Adm \Theta$ and consisting of all functions $\omega$ defined on $\mathbb{R}$ such that there exists an $f \ne 0$, $f \in K_\Theta$, satisfying $|f(x)|\leq\omega(x)$ almost everywhere on $\mathbb{R}$. Firstly, using some simple Hilbert transform techniques, we obtain a general multiplier theorem applicable to any $K_\Theta$ generated by a meromorphic inner function. In contrast with \cite{HM02I}, we consider generating functions $\Theta$ such that the unit vector $\Theta(x)$ winds up fast as $x$ grows from $-\infty$ to $\infty$. In particular, we consider $\Theta=B$ where $B$ is a Blaschke product with "horizontal" zeros, i.e., almost uniformly distributed in a strip parallel to and separated from $\mathbb{R}$. It is shown, among other things, that for any such $B$, any even $\omega$ decreasing on $(0,\infty)$ with a finite logarithmic integral is in $\Adm B$ (unlike the "vertical" case treated in \cite{HM02I}), thus generalizing (with a new proof) a classical result related to $\Adm\exp(i\sigma z)$, $\sigma>0$. Some oscillating $\omega$'s in $\Adm B$ are also described. Our theme is related to the Beurling-Malliavin multiplier theorem devoted to $\Adm\exp(i\sigma z)$, $\sigma>0$, and to de Branges' space $\mathcal{H}(E)$.
Keywords: Hardy space, inner function, shift operator, model subspace, Hilbert transform, admissible majorant
Categories: 30D55, 47A15
45. CJM 2003 (vol 55 pp. 1231)
Havin, Victor; Mashreghi, Javad
Admissible Majorants for Model Subspaces of $H^2$, Part I: Slow Winding of the Generating Inner Function
A model subspace $K_\Theta$ of the Hardy space $H^2 = H^2 (\mathbb{C}_+)$ for the upper half plane $\mathbb{C}_+$ is $H^2(\mathbb{C}_+) \ominus \Theta H^2(\mathbb{C}_+)$, where $\Theta$ is an inner function in $\mathbb{C}_+$. A function $\omega \colon \mathbb{R}\mapsto[0,\infty)$ is called an admissible majorant for $K_\Theta$ if there exists an $f \in K_\Theta$, $f \not\equiv 0$, with $|f(x)|\leq \omega(x)$ almost everywhere on $\mathbb{R}$. For some (mainly meromorphic) $\Theta$'s some parts of $\Adm\Theta$ (the set of all admissible majorants for $K_\Theta$) are explicitly described. These descriptions depend on the rate of growth of $\arg \Theta$ along $\mathbb{R}$. This paper is about slowly growing arguments (slower than $x$). Our results exhibit the dependence of $\Adm B$ on the geometry of the zeros of the Blaschke product $B$. A complete description of $\Adm B$ is obtained for $B$'s with purely imaginary ("vertical") zeros. We show that in this case a unique minimal admissible majorant exists.
Keywords: Hardy space, inner function, shift operator, model subspace, Hilbert transform, admissible majorant
Categories: 30D55, 47A15
46. CJM 2003 (vol 55 pp. 449)
Albeverio, Sergio; Makarov, Konstantin A.; Motovilov, Alexander K.
Graph Subspaces and the Spectral Shift Function
We obtain a new representation for the solution to the operator Sylvester equation in the form of a Stieltjes operator integral. We also formulate new sufficient conditions for the strong solvability of the operator Riccati equation that ensures the existence of reducing graph subspaces for block operator matrices. Next, we extend the concept of the Lifshits-Krein spectral shift function associated with a pair of self-adjoint operators to the case of pairs of admissible operators that are similar to self-adjoint operators. Based on this new concept we express the spectral shift function arising in a perturbation problem for block operator matrices in terms of the angular operators associated with the corresponding perturbed and unperturbed eigenspaces.
Categories: 47B44, 47A10, 47A20, 47A40
47. CJM 2003 (vol 55 pp. 379)
Stessin, Michael; Zhu, Kehe
Generalized Factorization in Hardy Spaces and the Commutant of Toeplitz Operators
Every classical inner function $\varphi$ in the unit disk gives rise to a certain factorization of functions in Hardy spaces. This factorization, which we call the generalized Riesz factorization, coincides with the classical Riesz factorization when $\varphi(z)=z$. In this paper we prove several results about the generalized Riesz factorization, and we apply this factorization theory to obtain a new description of the commutant of analytic Toeplitz operators with inner symbols on a Hardy space. We also discuss several related issues in the context of the Bergman space.
Categories: 47B35, 30D55, 47A15
48. CJM 2002 (vol 54 pp. 1142)
Binding, Paul; Ćurgus, Branko
Form Domains and Eigenfunction Expansions for Differential Equations with Eigenparameter Dependent Boundary Conditions
Form domains are characterized for regular $2n$-th order differential equations subject to general self-adjoint boundary conditions depending affinely on the eigenparameter. Corresponding modes of convergence for eigenfunction expansions are studied, including uniform convergence of the first $n-1$ derivatives.
Categories: 47E05, 34B09, 47B50, 47B25, 34L10
49. CJM 2002 (vol 54 pp. 998)
Dimassi, Mouez
Resonances for Slowly Varying Perturbations of a Periodic Schrödinger Operator
We study the resonances of the operator $P(h) = -\Delta_x + V(x) + \varphi(hx)$. Here $V$ is a periodic potential, $\varphi$ a decreasing perturbation and $h$ a small positive constant. We prove the existence of shape resonances near the edges of the spectral bands of $P_0 = -\Delta_x + V(x)$, and we give their asymptotic expansions in powers of $h^{\frac12}$.
Categories: 35P99, 47A60, 47A40
50. CJM 2001 (vol 53 pp. 1031)
Sampson, G.; Szeptycki, P.
The Complete $(L^p,L^p)$ Mapping Properties of Some Oscillatory Integrals in Several Dimensions
We prove that the operators $\int_{\mathbb{R}_+^2} e^{ix^a \cdot y^b} \varphi (x,y) f(y)\, dy$ map $L^p(\mathbb{R}^2)$ into itself for $p \in J =\bigl[\frac{a_l+b_l}{a_l+(\frac{b_l r}{2})},\frac{a_l+b_l}{a_l(1-\frac{r}{2})}\bigr]$ if $a_l,b_l\ge 1$ and $\varphi(x,y)=|x-y|^{-r}$, $0\le r <2$; the result is sharp. Generalizations to dimensions $d>2$ are indicated.
Categories: 42B20, 46B70, 47G10
|
{}
|
# Torsion line bundles with non-vanishing cohomology on smooth ACM surfaces

Question (Hailong Dao, 2009-12-05): I am looking for an example of a smooth surface $X$ with a fixed very ample $\mathcal O_X(1)$ such that $H^1(\mathcal O(k))=0$ for all $k$ (such a thing is called an ACM surface, I think) and a ~~globally generated~~ line bundle $L$ such that $L$ is torsion in $Pic(X)$ and $H^1(L) \neq 0$.

Does such a surface exist? How can I construct one if it does exist? What if one asks for an even nicer surface, such as arithmetically Gorenstein? If not, then I am willing to drop smooth or globally generated, but would like to keep the torsion condition.

More motivation (thanks Andrew): such a line bundle would give a cyclic cover of $X$ which is not ACM, which would be of interest to me. I suppose one can think of this as a special counterexample to a weaker (CM) version of purity of branch locus.

To the best of my knowledge this is not a homework question (: But I do not know much geometry, so maybe someone can tell me where to find an answer. Thanks.

EDIT: Removed the global generation condition, by Dmitri's answer. I realized I did not really need it that much.

Answer (Dmitri, 2009-12-09): Let us show that a globally generated torsion line bundle $L$ on a (compact) complex surface is trivial.
Indeed, a globally generated line bundle has at least one section, say $s$. Let us take it. If $s$ has no zeros, then $L$ is trivial. But if $s$ vanishes somewhere, then any positive power $L^n$ has a section $s^n$ that vanishes at the same points. So no power of $L$ is trivial, i.e., $L$ is not a torsion bundle, contradiction.

Notice that we did not use the fact that the surface is smooth. And we also did not use the fact that we work with a surface...
|
{}
|
# How to show that the Complete Elliptic Integral of the First Kind increases in m?
How can you show that the complete elliptic integral of the first kind $\displaystyle K(m)=\int_0^\frac{\pi}{2}\frac{\mathrm du}{\sqrt{1-m^2\sin^2 u}}$, which equals the series $$K(m)=\frac{\pi}{2} \left(1+\left(\frac{1}{2}\right)^{2}m^2 +\left(\frac{1\cdot 3}{2\cdot 4}\right)^{2}m^4 +\cdots+ \left(\frac{(2n-1)!!}{(2n)!!} \right)^2 m^{2n} + \cdots \right),$$
increases in $m$?
Thanks
The expansion of the integral you wrote has terms for every odd power of $m$ as well. – Did Dec 8 '12 at 20:56
Im sorry, can you explain again? something wrong on the expansion? – JHughes Dec 8 '12 at 21:58
Yes, something was definitely wrong with the expansion... But it seems you saw the problem since you made the necessary correction. Note that this makes the accepted answer, which addresses (incorrectly) the original version of your question, a little odd. – Did Dec 8 '12 at 23:42
yeah yeah, thx, i already correct it. – JHughes Dec 11 '12 at 1:24
You can show that the derivative with respect to $m$ is always positive:
Note that $$K'(m)=\int_0^\frac{\pi}{2}\frac{m \sin^2 u\, \mathrm du}{(1-m^2\sin^2 u)^{3/2}} \geq 0,$$ as the integrand is nonnegative for all $0\leq m < 1$ (and strictly positive when $0 < m < 1$), so $K$ is increasing in $m$.
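This monotonicity can also be checked numerically. The following Python sketch (standard library only, using a plain midpoint rule rather than any special-function routine; not part of the proof) evaluates the integral for several values of $m$ and confirms the values are strictly increasing on $[0,1)$:

```python
import math

def K(m, n=20000):
    # K(m) = integral over [0, pi/2] of du / sqrt(1 - m^2 sin^2 u),
    # approximated with a midpoint rule on n subintervals
    h = (math.pi / 2) / n
    return sum(h / math.sqrt(1.0 - (m * math.sin((i + 0.5) * h)) ** 2)
               for i in range(n))

values = [K(t / 10) for t in range(10)]  # m = 0.0, 0.1, ..., 0.9
assert all(a < b for a, b in zip(values, values[1:]))  # K increases in m
print(round(values[0], 6))  # K(0) = pi/2, about 1.570796
```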
• The numerator $m\sin u$ should read $\frac12\sin^2u$. – Did Dec 8 '12 at 23:23
|
{}
|
lilypond-user
[Top][All Lists]
## Re: One line of lyrics, then two lines -> vertical alignment
From: James Bailey Subject: Re: One line of lyrics, then two lines -> vertical alignment Date: Sun, 22 Aug 2010 23:22:28 +0200
On Aug 22, 2010, at 10:35 PM, Frank Steinmetzger wrote:
> Hello List
>
> I have grazed the ML archive and I (believe to) remember to have seen this
> question before, but I couldn’t find anything, only other meanings of
> ”vertical alignment of lyrics“. So I’m just asking again. :-)
>
> See the attached image of a page I’d like to set. It is page 2 of a song,
> which shows that the first part of the song has three stanzas, placed in
> three
> lines of lyrics. The fourth stanza follows as a single lyrics line.
>
> I have already achieved this by using three \lyricsto, where the first one
> adds stanza 1, the second one adds stanza 2 and 4, and the third adds stanza
> 3.
>
> My problem now is in the last system, where the lyrics are split into two lines. I
> was unable to figure out how to achieve in it the way this page shows it.
>
|
{}
|
# Strategy and Outcomes
The Equality and Diversity Strategy, Outcomes and Action Plan, and how the University's vision reflects a continuing commitment to equality and diversity for both students and staff.
## Strategy
This is a single equality strategy to ensure that equality and diversity are guiding principles in our pursuit of academic excellence. Its introduction coincides with the implementation of the Equality Act 2010 and builds on its principle of integrating equality and diversity in policy and practice.
Equality and Diversity Strategy (PDF) (under review)
## Equality Outcomes and Mainstreaming Progress report 2019
This document sets out the University’s combined Equality Outcomes and Mainstreaming progress reporting, for the period 30 April 2017 – 30 April 2019. It gives highlights of the University’s progress in embedding its equality duties and provides links to employee and student equality data.
## Equality Outcomes and Actions 2017-2021
The University has set challenging Equality Outcomes and associated actions for the period 2017-21, to further the University’s strategic priorities and Equality and Diversity Strategy, and to meet the requirements of the Scottish regulations under the Equality Act 2010.
The University’s latest report on Mainstreaming the Equality Duty describes progress in making the general equality duty integral to the exercise of our functions, and includes progress made on our Equality Outcomes Actions as at April 2019.
## Mainstreaming Equality and Progress Report 2017
The University’s latest report on Mainstreaming the Equality Duty describes progress in making the general equality duty integral to the exercise of our functions, and includes progress made on our Equality Outcomes Actions as at April 2017.
## Employment Information
Further employment information, including monitoring and statistical reports, Equal Pay Audits, and the University's Equal Pay Statement, can be viewed at the following link:
Monitoring and Statistics
## Archived Equality Outcomes and Mainstreaming Reports
Equality Outcomes and Actions April 2013-17 (PDF)
Equality Outcomes and Actions Progress Report April 2015 (PDF)
Mainstreaming the Equality Duty April 2013 (PDF)
Mainstreaming the Equality Duty Progress Report April 2015 (PDF)
### Our Vision
The University of Edinburgh has a distinguished history of scholarship and endeavour that has contributed greatly to our society’s intellectual and economic advancement.
Our continuing commitment to equality and diversity has a vital role to play in ensuring our success as a great civic institution for both students and staff.
### Our aspirations
We aspire to be a place of first choice for some of the world's most talented students and gifted staff.
The University is committed to developing a positive culture, where all staff and students are able to develop to their full potential.
### Our commitment
The University is committed to embedding Equality and Diversity across all its work, and believes its strategy reflects its commitment and contribution to its place as a world-leading centre of academic excellence.
|
{}
|
## CryptoDB
### Paper: Efficient Invisible and Unlinkable Sanitizable Signatures
Authors: Xavier Bultel, Pascal Lafourcade, Russell W. F. Lai, Giulio Malavolta, Dominique Schröder, Sri Aravinda Krishnan Thyagarajan
DOI: 10.1007/978-3-030-17253-4_6
Venue: PKC 2019

Sanitizable signatures allow designated parties (the sanitizers) to apply arbitrary modifications to some restricted parts of signed messages. A secure scheme should not only be unforgeable, but also protect privacy and hold both the signer and the sanitizer accountable. Two important security properties that are seemingly difficult to achieve simultaneously and efficiently are invisibility and unlinkability. While invisibility ensures that the admissible modifications are hidden from external parties, unlinkability says that sanitized signatures cannot be linked to their sources. Achieving both properties simultaneously is crucial for applications where sensitive personal data is signed with respect to data-dependent admissible modifications. The existence of an efficient construction achieving both properties was recently posed as an open question by Camenisch et al. (PKC'17). In this work, we propose a solution to this problem with a two-step construction. First, we construct (non-accountable) invisible and unlinkable sanitizable signatures from signatures on equivalence classes and other basic primitives. Second, we put forth a generic transformation using verifiable ring signatures to turn any non-accountable sanitizable signature into an accountable one while preserving all other properties. When instantiating in the generic group and random oracle model, the efficiency of our construction is comparable to that of prior constructions, while providing stronger security guarantees.
##### BibTeX
@inproceedings{pkc-2019-29280,
title={Efficient Invisible and Unlinkable Sanitizable Signatures},
booktitle={Public-Key Cryptography – PKC 2019},
series={Lecture Notes in Computer Science},
publisher={Springer},
volume={11442},
pages={159-189},
doi={10.1007/978-3-030-17253-4_6},
author={Xavier Bultel and Pascal Lafourcade and Russell W. F. Lai and Giulio Malavolta and Dominique Schröder and Sri Aravinda Krishnan Thyagarajan},
year=2019
}
|
{}
|
# How can I prove that $\frac{m^2+1}{\left|m+\frac12\right|}$ is a rational number for all $m \in \mathbb{Z}$?
How can I prove that $$\frac{m^2+1}{\left|m+\frac12\right|}$$ is a rational number for all $m \in \mathbb{Z}$?
I know that the numerator is a rational number, and the denominator is also a rational number (and nonzero) for any $m \in \mathbb{Z}$, but I do not know how to conclude from this that the quotient is rational.
• The rationals are closed under multiplication, division, addition, and subtraction by definition. – The Great Duck Aug 18 '17 at 3:56
• @JohnColeman A common definition of $\mathbb{Q}$ is as the field of fractions of $\mathbb{Z}$. Depending on how you define $\mathbb{R}$, it might not make sense to define $\mathbb{Q}$ using the concept 'real number'. – jwg Aug 18 '17 at 13:08
• @JohnColeman jwg's point is exactly what I'm saying. You don't have to necessarily say it is all numbers expressed as that ratio. It can also be the "rational set" or "field of fractions" which for all algebraic number rings is already defined to be a field. Regardless, my point was to say the question was trivial to the point that any answer will just say "by definition". I suspect that the asker might want something deeper than just the properties of rational numbers. – The Great Duck Aug 18 '17 at 14:58
• @JohnColeman Think of it this way: Define the minimal extension upon the integers such that the resulting set is a field. That's the rational numbers. Now you might say that we have to prove the two definitions are equivalent but we don't as some people will define the rational numbers by my above statement. Is it wrong? Probably...? But does it actually matter that much? After all, being closed under those things does follow directly from its definition either way. – The Great Duck Aug 18 '17 at 15:00
• @JohnColeman in all proofs there is a concept of "previous knowledge" or "context". If the author wants them to demonstrate the rationals are closed, that's one thing. However, most will just say they are as it's pretty well accepted at this point. I just had such a course this last spring and probably had a similar question (I don't know for but I'm just guessing) and never were we asked to actually show the rationals were closed. It might be brought up in-class if people didn't know that, but it was considered self-evident. – The Great Duck Aug 18 '17 at 15:04
If you mutiply the top and bottom of your fraction by $2$, you get:
$$\frac{m^2+1}{|m+\frac12|} = \frac{2(m^2+1)}{|2m+1|},$$
which is a ratio of two integers, and we don't have to worry about the denominator being $0$, because that's not possible for an integer $m$. In general, to show that the quotient of two rationals is rational, you just need to clear denominators:
$$\frac{a/b}{c/d} = \frac{a/b}{c/d}\cdot\frac{bd}{bd} = \frac{ad}{bc}.$$
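As a quick sanity check (not part of the proof), the same clearing of denominators can be verified with exact rational arithmetic using Python's standard `fractions` module:

```python
from fractions import Fraction

def original(m):
    # (m^2 + 1) / |m + 1/2|, computed with exact rational arithmetic
    return Fraction(m * m + 1) / abs(Fraction(m) + Fraction(1, 2))

def cleared(m):
    # 2(m^2 + 1) / |2m + 1|, a ratio of two integers
    return Fraction(2 * (m * m + 1), abs(2 * m + 1))

assert all(original(m) == cleared(m) for m in range(-50, 51))
print(original(3))  # 20/7
```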
• It may also be important to note that $|2m+1|$ is never equal to $0$ for $m\in\mathbb{Z}$. – Barry Cipra Aug 17 '17 at 20:03
• Fair enough. Edited, although if we know that the original is defined at all, that's already taken care of, really. – G Tony Jacobs Aug 17 '17 at 20:11
• @GTonyJacobs I do appreciate showing that the denominator is non-zero. If the question were "For which $m\in\Bbb Z$ is $\frac{m^2+1}{|m+1|}$ a rational number?" you'd definitely have to do the check. – Hagen von Eitzen Aug 18 '17 at 9:01
The quotient of two rationals is always a rational number.
• yeah I know it but how can I prove that? – Eii Aug 17 '17 at 20:00
• Let $p=a/b$ and $q=c/d$. Simply compute $p/q$ and use the definition of a rational number. – szw1710 Aug 17 '17 at 20:14
• @MichaelHardy yeah but also c must not be 0 right? – Eii Aug 17 '17 at 20:49
• @MichaelHardy That's not exactly what arithmetic would tell us. :p – Adayah Aug 17 '17 at 20:55
• @eli : Yes. Actually $a$ is the only one that could be $0. \qquad$ – Michael Hardy Aug 17 '17 at 23:52
You can either prove the rationals are closed under the four arithmetic operations, or in your specific case you can just demonstrate it. If $m \gt 0$ your fraction is $\frac {m^2+1}{m+\frac 12}=\frac {2m^2+2}{2m+1}$ and we have displayed two integers you can divide to get your number. The denominator is never zero because of the $\frac 12$. The case $m \lt 0$ is similar.
• Except when you are dividing by $0$. – Kenny Lau Aug 17 '17 at 20:23
Use the fact that $\mathbb{Q}$ is a field thus if $k \in \mathbb{Q},k \neq 0$ then $k^{-1}=\frac{1}{k} \in \mathbb{Q}$
and that if $a,b \in \mathbb{Q}$ then $ab \in \mathbb{Q}$
• Except when $k=0$. – Kenny Lau Aug 17 '17 at 20:23
• Yes indeed, thank you for pointing it out... – Marios Gretsas Aug 17 '17 at 20:26
|
{}
|
# math.exe vs wolfram.exe vs MathKernel.exe vs WolframKernel.exe (running scripts on Windows)
I generally consider Mma documentation very good. However, it is startlingly difficult to find useful information on running Mma batch scripts, especially on Windows. So ... just what would I enter in the Wolfram documentation search dialog to fetch this information? Here are things that don't work: math, math.exe, wolfram.exe, wolfram, MathKernel, and -script. Now WolframKernel brings up a page, but it is not useful. It does provide a promising link to http://reference.wolfram.com/language/tutorial/WolframLanguageScripts.html which is not completely devoid of information but (intentionally?) does not address Windows users. Worse, it suggests that MathKernel and math are equivalent, which (on Windows at least) is manifestly untrue.
So here is what I currently believe, not from the docs, but from looking at the binaries in my Mma folder and trying them out.
• math.exe and wolfram.exe are the same and can be used with the -script option to run scripts
• MathKernel.exe and WolframKernel.exe are the same but (contrary to the Wolfram page above) are different from the other two commands in some unspecified ways and (again contrary to the page) cannot be used with the -script option to run scripts. (Or at least, contrary to that page, output intended for stdout does not go there.)
So, what are these 4 files actually, and where are they documented, and where in particular is the documentation for Windows users as to how to use them?
These two small programs -- math.exe is the same as wolfram.exe, and MathKernel.exe is the same as WolframKernel.exe -- are kernel loaders, which provide an interface to the same main kernel code residing in the dynamic library WolframEngine.dll (also known as mathdll.dll in versions prior to 10.1.0).
Both accept the same command line options as documented for wolfram and WolframKernel. The main difference is that wolfram.exe is a console application, while WolframKernel.exe is a GUI, windowed application offering some basic copy and paste, font selection and scrolling functionality.
The standard behavior of the Windows OS is that upon launching a console application, it is attached to the console of the parent process (for example, cmd.exe) if present, otherwise a new console is created. That is not the case for GUI applications, which by default run without a console, and therefore do not have stdin, stdout and stderr.
While the latter is not impossible to code around, it is hardly necessary in this case, as wolfram.exe is provided as a true console application suitable for scripting purposes.
The kernel launchers organization is slightly different on other operating systems: on Linux, math/wolfram/MathKernel/WolframKernel are all the same shell script, which launches the kernel loader binary. On MacOS X, there is only a MathKernel/WolframKernel binary loader.
Something to keep in mind about the documentation is that it is the same on all platforms, and so is written as much as possible in a general and platform-independent manner, without delving into OS specifics in many cases.
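For example (the file name `hello.wl` and its contents are just an illustration), a Windows user can run a script with the console loader from a command prompt using the documented `-script` option:

```
(* hello.wl -- run from cmd.exe as:  wolfram -script hello.wl  *)
(* math -script hello.wl is equivalent, since math.exe and wolfram.exe are the same loader. *)
(* Print writes to stdout, which the console loader attaches to the parent console; *)
(* launching the same script via WolframKernel.exe would show no console output. *)
Print["2^100 = ", 2^100]
```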
• Can the information you provide somehow be extracted from the available docs? If not, can you indicate your source? Thanks! – Alan May 21 '15 at 18:08
• Actually I think none of it is so arcane that it couldn't be deduced by reading the documentation, inspecting the layout and launching these executables plus certain familiarity with Windows. I hope my answer is still useful in some way and doesn't sound like complete guesswork. – ilian May 21 '15 at 18:39
• My query was not meant in any way to challenge your description. As you can see, my question engaged in similar inferences about these executables. That said, what did you mean by "inspecting the layout"? Thanks. – Alan May 21 '15 at 20:09
• The same as in the question really -- just looking at the files installed, trying them out etc. But I agree the documentation definitely could be tweaked to be more helpful to Windows standalone kernel users. – ilian May 21 '15 at 20:25
• @Vladimir Nowadays, the kernel has broader functionality, and powers products other than Mathematica, hence the new name. MathKernel.exe is kept for backward compatibility. – ilian Jun 2 '15 at 19:44
|
{}
|
# Formula for the Curl of a Vector Field
In this lesson you will find the curl of a vector field in three different coordinate systems, and a method for generating the curl formula in each of them. Curl in two dimensions · Line integrals in a vector field ... Specifically, (drumroll please), here's the formula defining the two-dimensional curl.
The flow velocity of a fluid is a vector field, ... Learn that the equations of motion for irrotational flow reduce to a single partial ...
This is a list of some vector calculus formulae of general use in working ... Curl of a second-order tensor field in a cylindrical coordinate system ...
Vorticity is mathematically defined as the curl of the velocity field and is hence a measure of local rotation of the fluid. This definition makes it a vector ...
Example 7 – Finding the Curl of a Vector Field. Find $\operatorname{curl} F$ of the vector field ... Begin by writing a parametric form of the equation of the line segment:
Maxwell's equations relating the electric field E and magnetic field H as ... This exercise demonstrates a connection between the curl vector and rotations.
By HV Dannon · 2013 · Cited by 1: Non-Archimedean, Calculus, Limit, Continuity, Derivative, Integral, Gradient, Divergence, Curl, Maxwell Equations, Electrodynamics, Electromagnetic.
To find the angle between $\mathbf a$ and $\mathbf b$, use the formula $\mathbf a \cdot \mathbf b = |\mathbf a|\,|\mathbf b| \cos\theta$. Vector Calculus Solution Manual [6nq88xp712nw]. Unlike static PDF Vector Calculus ...
By J Williams · 2016 · Cited by 1: Keywords: vectors; vector fields; curl; divergence; Divergence Theorem.
## curl of a vector field formula
The Navier‐Stokes equations in computational engineering mathematics.
We have learned about the curl for two-dimensional vector fields. By definition, if $F = \langle M, N \rangle$ then the two-dimensional curl of $F$ is $\operatorname{curl} F = N_x - M_y$.
Lifesaver · Vector Calculus · Introduction to Plane Algebraic Curves · Div, Grad, Curl, and All That · Student Solutions Manual [for] Vector Calculus · Vector Calculus, ...
Compute the curl of the vector field $F(r, \varphi, \theta) = r\,\mathbf e_\varphi + \mathbf e_\theta$ in spherical coordinates. Solution: Using the formula for the curl in spherical coordinates with ...
In the formulation of Maxwell's equations and the wave equation, some specialized ... In Cartesian coordinates, the curl of a vector field F is defined as.
Calculate the divergence and curl of $\dlvf = (-y, xy, z)$. ... Also, remember that the divergence of a vector field is often a variable quantity and will ...
The gradient operator may also be applied to vector fields. Let $F = (F_1, F_2, F_3) = F_1 \mathbf i + F_2 \mathbf j + F_3 \mathbf k$, ... Verify that $\varphi = \ln(x^2 + y^2)$ satisfies the Laplace equation.
The curl of the vector field $[x^2 + y^5,\ z^2,\ x^2 + z^2]$ is $[-2z,\ -2x,\ -5y^4]$.
## how to find the curl of a vector field
... $f$ which satisfies the Laplace equation $\Delta f = 0$, like $f(x, y) = x^3 - 3xy^2$ ...
qualitatively how the curl of a vector field behaves from a picture. ... insert a third component equal to 0 in your vector field and use the above formula.
From The Divergence of a Vector Field and The Curl of a Vector Field pages we gave formulas for the ... We can apply the formula above directly to get that:
The curl of $F$ is $$\nabla\times F=\begin{vmatrix}\mathbf i&\mathbf j&\mathbf k\\ \frac{\partial}{\partial x}&\frac{\partial}{\partial y}&\frac{\partial}{\partial z}\\ f&g&h\end{vmatrix}=\left\langle \frac{\partial h}{\partial y}-\frac{\partial g}{\partial z},\ \frac{\partial f}{\partial z}-\frac{\partial h}{\partial x},\ \frac{\partial g}{\partial x}-\frac{\partial f}{\partial y}\right\rangle.$$ Here are two simple but useful facts about divergence and curl.
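As an illustration (a Python sketch, not taken from the quoted sources), the component formula for the curl can be checked numerically with central differences, using the example field $F=(x^2+y^5,\ z^2,\ x^2+z^2)$ mentioned above:

```python
def curl(F, x, y, z, eps=1e-5):
    # Numerical curl of F = (F0, F1, F2) at (x, y, z) via central differences:
    # curl F = (dF2/dy - dF1/dz, dF0/dz - dF2/dx, dF1/dx - dF0/dy)
    def d(i, j):
        # partial derivative of component F_i with respect to coordinate j (0=x, 1=y, 2=z)
        p, m = [x, y, z], [x, y, z]
        p[j] += eps
        m[j] -= eps
        return (F(*p)[i] - F(*m)[i]) / (2 * eps)
    return (d(2, 1) - d(1, 2),
            d(0, 2) - d(2, 0),
            d(1, 0) - d(0, 1))

# The example field quoted above: F = (x^2 + y^5, z^2, x^2 + z^2)
F = lambda x, y, z: (x**2 + y**5, z**2, x**2 + z**2)
print(curl(F, 1.0, 2.0, 3.0))  # close to (-2z, -2x, -5y^4) = (-6, -2, -80)
```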
I actually never quite worked out the curl formula myself in terms of fancier differential geometry language. I imagine it's: take a vector field (in $\mathbb{R}^3$), turn ... 21 answers · Top answer: To me, the explanation for the appearance of div, grad and curl in physical equations is ...
Gradient of Scalar field — Therefore, it is better to convert a vector field to a ... We can also show the above formula in terms of 'nabla' and ...
Lecture 14: Vector field, Divergence and Curl. • Vector Fields. Def (Vector field): A vector field in $\mathbb{R}^n$ is a map $F \colon A \subset \mathbb{R}^n \to \mathbb{R}^n$ that assigns ...
Even though these algorithms are used widely today in almost all fields of Science, ... Formally, the solution to the BPM equations (whether Full-Vector, ...
|
{}
|