# Introduction

Prompted by numerous encounters with models imported from the deep unsupervised learning field, namely variational autoencoders (World Models [1]), distribution estimators (Diversity Is All You Need [2]), or RealNVP [3] (which appears in the paper Latent Space Policies for Hierarchical Reinforcement Learning [4]), it became apparent that a better understanding of such models would be required for more interesting research progress. An additional motivation is the need for reinforcement learning to become more autonomous and make better use of its experience, which seemingly requires better representation learning or better models; both, again, relate closely to unsupervised learning. Following the CS294-158-SP19 Deep Unsupervised Learning course of UC Berkeley, I set off to reproduce the Masked Autoencoder for Distribution Estimation (MADE) [5]. While it is advertised as a simple enough algorithm, that is not necessarily the case, especially for a newcomer to the sub-field. Without further delay, let us dive into the content of the paper and the step-by-step process of reproducing the proposed method, as well as fitting it to our specific needs.

In unsupervised learning, the focus is on recovering the distribution of the data we are provided, to use it on downstream tasks such as new sample generation, compression, and so on. A simple example is the autoencoder, which aims at learning a compressed representation $Z \in \mathbb{R}^K$ from a set of observed variables $X \in \mathbb{R}^D$. In this case, $Z$ serves as an approximation of the unobserved latent variable that was used to generate the data $X$. Once we have approximated that latent variable, we can sample from the corresponding latent space to generate new but related samples, or reuse the learned representation (the encoder part) to improve performance on a downstream classification task.
The following figure provides a more intuitive example of that mechanism. The core idea of MADE builds on top of that concept: it leverages relationships that exist among the elements of the input variable $X$ to build a prediction model for each element of said input. Indeed, if we consider the red pixel in the flower picture of Figure 2, knowing about all the preceding (blue) pixels can help us estimate its value more effectively. This property is formally referred to as “autoregression” (dependence on itself), and is implemented in MADE by introducing masks over the weights of the neural network that is used to estimate the distribution of each element. More concretely, this is achieved by masking out all the pixels from the red one onward (the grey pixels in the corresponding figure). The distribution of an arbitrary pixel thus becomes dependent on only a subset of the other pixels.

While Figure 2 and the accompanying explanation follow the natural order of the pixels (namely, from the upper-left corner to the bottom-right one), MADE loosens this restriction to allow conditioning over an arbitrary ordering of the pixels (or input variables). Intuitively, this can come in handy because we do not actually know the real autoregressive relationship between the input variables. The natural order might not be the best one; taking Figure 2 as an example, predicting the value of the red pixel could be more effective if we conditioned on some of the pixels that come later (the grey ones). In any case, we shall explore how this is achieved in the following section, as well as in the implementation section.

# Technical details

## Standard Autoencoder

Before actually diving into the method proposed in the paper, let us first quickly review the inner workings of an autoencoder, using the very simple one illustrated in Figure 3.
For simplicity, we define the input $X$ of the neural network as a vector of 3 real variables: $X = \left( \matrix{x_1 & x_2 & x_3} \right)$. To compute the unique hidden layer, in this case the latent $H = \left( \matrix{h_1 & h_2 & h_3 & h_4} \right)$, we define the “input-to-hidden” weight matrix $W$ and the “input bias” vector $B$ such that $H = g \left( X \times W + B \right)$, or more explicitly:

$W = \left( \matrix { w_{11} & w_{12} & w_{13} & w_{14} \\ w_{21} & w_{22} & w_{23} & w_{24} \\ w_{31} & w_{32} & w_{33} & w_{34} }\right) \enspace \mathrm{and} \enspace B = \left( \matrix{ b_1 & b_2 & b_3 & b_4} \right)$

$H$ is thus obtained as follows:

$H = \left(\matrix{h_1 \\ h_2 \\ h_3 \\ h_4} \right) = g \left( \matrix{ x_1 w_{11} + x_2 w_{21} + x_3 w_{31} + b_1 \\ x_1 w_{12} + x_2 w_{22} + x_3 w_{32} + b_2 \\ x_1 w_{13} + x_2 w_{23} + x_3 w_{33} + b_3 \\ x_1 w_{14} + x_2 w_{24} + x_3 w_{34} + b_4 } \right) \mathrm{,} \enspace (1)$

where $g$ is a non-linear activation function. It is worth keeping in mind, however, that (a) in general the latent dimension is set to be smaller than the input dimension, otherwise the neural network can just learn to “copy” the inputs, which defeats the original purpose, and (b) while $H$ is a row vector in our convention, it is written as a column in Equation (1) for better readability; this will also make the explanation of MADE’s core idea more intuitive later on.
Then, to map from the latent $H$ to the output vector, in this case the reconstructed input $\hat{X} = \left( \matrix{ \hat{x}_1 & \hat{x}_2 & \hat{x}_3} \right)$, we also declare the “hidden-to-output” weight matrix $V$ and the corresponding bias vector $C$ as below:

$V = \left( \matrix { v_{11} & v_{12} & v_{13} \\ v_{21} & v_{22} & v_{23} \\ v_{31} & v_{32} & v_{33} \\ v_{41} & v_{42} & v_{43} }\right) \enspace \mathrm{and} \enspace C = \left( \matrix{ c_1 & c_2 & c_3} \right)$

The output $\hat{X}$ is then computed as $\hat{X} = \sigma \left( H \times V + C \right)$, which can be further broken down as:

$\hat{X} = \left( \matrix{ \hat{x}_1 \\ \hat{x}_2 \\ \hat{x}_3 } \right) = \sigma \left( \matrix{ h_1 v_{11} + h_2 v_{21} + h_3 v_{31} + h_4 v_{41} + c_1 \\ h_1 v_{12} + h_2 v_{22} + h_3 v_{32} + h_4 v_{42} + c_2 \\ h_1 v_{13} + h_2 v_{23} + h_3 v_{33} + h_4 v_{43} + c_3} \right), \enspace (2)$

where $\sigma$ represents the activation of the output layer. When dealing with binary variables, $\sigma$ is effectively understood as the sigmoid function, which squashes whatever raw output is computed to lie between $0$ and $1$. From Equations (1) and (2), we already start seeing how each element of the output $\hat{x}_1$, $\hat{x}_2$, and $\hat{x}_3$ is related to the inputs $x_1$, $x_2$ and $x_3$. Namely, we can express each one of them as a function of the inputs:

$\hat{x}_1 = f_1(x_1,x_2,x_3) \\ \hat{x}_2 = f_2(x_1,x_2,x_3) \\ \hat{x}_3 = f_3(x_1,x_2,x_3)$

The goal of MADE is to change the inner workings of the autoencoder neural network so that every element of the output depends only on a subset of the original inputs:

$\hat{x}_1 = f_1' \\ \hat{x}_2 = f_2'(x_1) \enspace (3) \\ \hat{x}_3 = f_3'(x_1,x_2)$

The relations above assume we are following the natural ordering of the input variables. Additionally, we observe that $\hat{x}_1$ is basically a constant, as it depends only on the bias weights of the last layer of the network we shall be using.
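As a quick numerical sanity check of Equations (1) and (2), the forward pass of this tiny 3-4-3 autoencoder can be written in a few lines of PyTorch. The weight values below are random placeholders and $g$ is arbitrarily chosen as ReLU; none of these values come from the paper.

```python
import torch as th

th.manual_seed(0)

# Input and hidden sizes, as in Figure 3
D, K = 3, 4

X = th.tensor([[1.0, 0.0, 1.0]])     # row vector of shape (1, D)
W = th.randn(D, K); B = th.zeros(K)  # "input-to-hidden" weights and bias
V = th.randn(K, D); C = th.zeros(D)  # "hidden-to-output" weights and bias

H = th.relu(X @ W + B)               # Equation (1), with g = ReLU
X_hat = th.sigmoid(H @ V + C)        # Equation (2), with sigma = sigmoid

print(H.shape, X_hat.shape)          # torch.Size([1, 4]) torch.Size([1, 3])
```

Every output element still depends on every input here; the whole point of MADE, described next, is to mask $W$ and $V$ so that this is no longer the case.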
For the sake of completeness, let us quickly review the loss function of the standard autoencoder when the input is formed of binary variables. Since the objective is to have $\hat{X}$ be as close as possible to the original input $X$, the loss function must materialize the difference between the two. Since the output $\hat{X} = \sigma(H \times V + C)$ is actually the probability of each of its elements being $1$, the Binary Cross Entropy loss function is adequate for our objective. An intuitive explanation of how it works would be as follows: let us assume the original input to be $X = \left( \matrix{ 1 & 0 & 1} \right)$, and two candidate outputs $\hat{X}_1 = \left( \matrix{ 0.9 & 0.15 & 0.95 } \right)$ and $\hat{X}_2 = \left( \matrix{ 0.5 & 0.95 & 0.25 } \right)$. Since each element $\hat{x}_i, i \in \{1,2,3\}$ of either $\hat{X}_1$ or $\hat{X}_2$ gives us the probability of the $i$-th element being $1$, it follows that $\hat{X}_1$ describes the original input $X$ more accurately. This can be formally measured by the Binary Cross Entropy (BCE) loss function, which we define as follows:

$\mathrm{BCELoss}(X,\hat{X}) = - \sum_{i=1}^{\vert X \vert} \left( x_i \times \mathrm{log}(\hat{x}_i) + \left( 1 - x_i \right) \times \mathrm{log}(1 - \hat{x}_i) \right)$

Our previous conjecture is thus objectively justified as follows:

$\mathrm{BCELoss}(X, \hat{X}_1) \approx 0.32 \\ \mathrm{BCELoss}(X, \hat{X}_2) \approx 5.08$

By design, the BCE loss decreases as the prediction accuracy increases. Therefore, we can apply any gradient descent method to minimize said loss, fit our model (neural network), and obtain the weight values that achieve our objective.

## Autoregressive Autoencoder

Recall that in the simplest case, MADE proposes to predict the value of an arbitrary element $x_i$ of $X$ based on the preceding elements $x_{<i}$, i.e. to model $p(x_i \vert x_{<i})$.
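These two loss values can be reproduced with PyTorch's built-in BCE helper, using sum reduction (and the natural logarithm) to match the formula above:

```python
import torch as th
import torch.nn.functional as F

X = th.tensor([1.0, 0.0, 1.0])
X_hat_1 = th.tensor([0.9, 0.15, 0.95])
X_hat_2 = th.tensor([0.5, 0.95, 0.25])

# reduction="sum" matches the summation over i in the BCE formula
loss_1 = F.binary_cross_entropy(X_hat_1, X, reduction="sum")
loss_2 = F.binary_cross_entropy(X_hat_2, X, reduction="sum")

print(round(loss_1.item(), 2), round(loss_2.item(), 2))  # 0.32 5.08
```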
If we consider the joint probability of all the elements of the vector $X$, this property is written as:

$p(X) = \prod_{i=1}^{\vert X \vert}p(x_i \vert x_{<i}) \mathrm{.}$

This is achieved by masking the weights of the neural network. Still using the simple autoencoder network introduced above, we would have:

$H = g \left( X \times \left( W \odot M^W \right)+ B \right)$ and $\hat{X} = \sigma \left( H \times \left( V \odot M^V \right) + C \right)$

To define the masks $M^W$ and $M^V$, we first need to decide on (1) an ordering for the inputs and (2) the connectivity of the hidden units ($H$) that compose our network, with respect to not only the inputs $X$ but also the outputs $\hat{X}$. Regarding the input ordering, we assume the natural ordering, as in the bottom layer of Figure 4. For the hidden units, however, the original work proposes sampling each connectivity number from a uniform distribution between $1$ and $\vert X \vert - 1$. As per our example in Figure 4, we will use the connectivity numbers $\matrix{ 1 & 2 & 1 & 2}$ to further illustrate how the autoregressive property is achieved. Once the orderings are decided, we can proceed to generate the masks, according to the following two rules:

(1) When computing the hidden units based on either the input vector $X$ or a previous hidden layer’s units: let $k$ be the index of an arbitrary unit $h_k$, and $m(k)$ the connectivity number of said $k$-th unit; we then go over all the elements $d$ of the ordering (or connectivity numbers) of the previous layer. For simplicity, let us consider the ordering of the first component $x_1$, which is $d=1$, together with the first hidden unit $h_1$ (so $k = 1$) of the hidden layer $H$. If $m(k)$ is greater than or equal to $d$, we allow the $k$-th element of the hidden layer to depend on that input. Since we are currently considering only $x_1$ and $h_1$, and $m(1) = 1 \geq 1 = d$, the connection is allowed.
This corresponds to leaving the weight $w_{11}$ of Equation (1) (the $h_1$ line) as it is, which corresponds to a masking value of $1$. Therefore, any downstream operation that uses the value of $h_1$ will have a dependency on the $x_1$ element. Now, for the opposite case, consider the element $x_3$, with ordering $d = 3$, while keeping $h_1$. Since $3$ is greater than any $m(k)$ of that hidden layer (recall that $m(k) \in \{1,2\}, \forall k$), we want to nullify the weight $w_{31}$, so that $h_1$ has no relation with $x_3$. Therefore, we attribute the value $0$ to the corresponding element of $M^W$. This mask generation process is formally defined by the authors as follows:

$M_{k,d}^{W} = 1_{m(k) \geq d} = \Bigg\{ \matrix { 1 \enspace \mathrm{if} \enspace m(k) \geq d \\ 0 \enspace \mathrm{otherwise} }$

Considering our simple autoencoder network in Figure 3, we would obtain the mask:

$M^W = \left( \matrix { 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 0 & 0 }\right)$

(the last row is all zeros, since no hidden unit is allowed to depend on $x_3$), and the hidden layer would thus be obtained via the mask-augmented computation $H = g \left( X \times \left( W \odot M^W \right)+ B \right)$, resulting in the hidden layer specified below:

$H = \left(\matrix{h_1 \\ h_2 \\ h_3 \\ h_4} \right) = g \left( \matrix{ x_1 w_{11} + b_1 \\ x_1 w_{12} + x_2 w_{22} + b_2 \\ x_1 w_{13} + b_3 \\ x_1 w_{14} + x_2 w_{24} + b_4 } \right) = \left(\matrix{f^h_1(x_1) \\ f^h_2(x_1,x_2) \\ f^h_3(x_1) \\ f^h_4(x_1, x_2)} \right) \mathrm{.}$

The last column explicitly shows the dependence of each component $h_i$ on the inputs. So far so good.

(2) Recall that when computing the outputs $\hat{X}$ based on the last hidden layer ($H$ in this case), we want to make sure that, for example, the element $\hat{x}_2$ only depends on $x_1$, as per Equation (3). To do so, the authors propose to first attribute the input ordering to the output too. Then the following formula is used to generate the mask that realizes the autoregressive property in the output:
$M_{d,k}^{V} = 1_{d > m(k)} = \Bigg\{ \matrix { 1 \enspace \mathrm{if} \enspace d > m(k) \\ 0 \enspace \mathrm{otherwise} }$

Intuitively, let us consider the hidden layer’s component $h_2 = f^h_2(x_1,x_2)$, and try to find which output components $\hat{x}_i$ should be connected to it. Since $\hat{x}_1$ is supposed to represent the estimate of $p(x_1)$, it must not depend on any of the $x_i$. Therefore, the weight that connects $h_2$ to $\hat{x}_1$ must be set to $0$. $\hat{x}_2$, however, estimates $p(x_2 \vert x_1)$; it therefore cannot depend on $h_2$, since the latter depends on $x_2$. The corresponding weight must thus be set to $0$ as well, and the same goes for $h_4$. Since $h_1$ and $h_3$ only depend on $x_1$, the weights connecting those two to $\hat{x}_2$ can be set to $1$. Again, for our simple autoencoder’s output weights $V$, we would obtain the mask:

$M^V = \left( \matrix { 0 & 1 & 1 \\ 0 & 0 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 }\right)$

and the computation $\hat{X} = \sigma \left( H \times \left( V \odot M^V \right) + C \right)$ can be decomposed as follows:

$\hat{X} = \left(\matrix{ \hat{x}_1 \\ \hat{x}_2 \\ \hat{x}_3 } \right) = \sigma \left( \matrix{ c_1 \\ h_1 v_{12} + h_3 v_{32} + c_2 \\ h_1 v_{13} + h_2 v_{23} + h_3 v_{33} + h_4 v_{43} + c_3 } \right) = \left( \matrix{f^v_1 \\ f^v_2(x_1) \\ f^v_3(x_1,x_2)} \right) \mathrm{,}$

which is exactly the result aimed for in Equation (3). Pictorially, this is illustrated in the following figure. Also notice how the $x_3$ element is never used for prediction, as it should be. Furthermore, despite $\hat{x}_1$ being independent of any input $x_i$, it can still model a distribution by relying on the bias weights $C$ of the output layer. Finally, while the masking makes the connectivity in the network sparser (especially when compared with Figure 3), the resulting neural network is usually still powerful enough to model the distribution of the output. Worst case, we can always add more layers to make up for that sparsity.
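Both mask formulas are easy to check mechanically. The short NumPy sketch below rebuilds $M^W$ and $M^V$ for the toy example (natural input ordering, hidden connectivity numbers $(1, 2, 1, 2)$); the variable names are mine, not the paper's.

```python
import numpy as np

D = 3                            # input size |X|
d_in = np.arange(1, D + 1)       # natural input ordering (1, 2, 3)
m_hid = np.array([1, 2, 1, 2])   # hidden-unit connectivity numbers, as in Figure 4

# Input-to-hidden mask: M^W[d, k] = 1 iff m(k) >= d
M_W = (m_hid[None, :] >= d_in[:, None]).astype(int)

# Hidden-to-output mask: M^V[k, d] = 1 iff d > m(k)
M_V = (d_in[None, :] > m_hid[:, None]).astype(int)

print(M_W)  # rows: [1 1 1 1], [0 1 0 1], [0 0 0 0]
print(M_V)  # rows: [0 1 1], [0 0 1], [0 1 1], [0 0 1]
```

The all-zero last row of $M^W$ matches the fact that no hidden unit may depend on $x_3$.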
Finally, it is important to note that while the example above relies on the natural ordering of the inputs, in practice we might benefit from using a different, e.g. random, ordering. This is also taken into account by the authors in the paper, where they propose to permute the ordering of the inputs and use the permutation to generate the masks. Similarly, instead of using a single set of connectivity numbers for the components of the hidden layers, a different parameterization might give better results by capturing the autoregressive property more accurately. To address this problem, MADE is trained in a connectivity-agnostic fashion, namely by intermittently generating new connectivity numbers for the hidden layers during training. The final model is then evaluated across all those different orderings. Such a method contributes to a more robust model, as the latter has to fit its weights so as to support different configurations of its hidden layers. We shall now introduce a simple implementation of the masked autoencoder in the next section.

# Implementation, Experiments and Results

We first apply the MADE technique to reconstruct digits from the MNIST dataset, as per the original work, using the Python programming language and the PyTorch deep learning framework. (Full code and results)

## 1. Binarized MNIST Digits modeling using MADE

Let us first define a few dependencies as follows:

```python
# Dependencies
import math  # needed later when laying out the sampling grid
import random
import numpy as np

# Pytorch support
import torch as th
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torchvision import transforms, datasets
from torchvision.utils import make_grid

import matplotlib.pyplot as plt
```

This makes available the various libraries that will be used to generate or read the data, create the neural networks and train them, as well as plot the results.
Next, we declare a few hyperparameters, among which the architecture of the neural networks, the ordering of the inputs, and the number of masks to use for the layer masking.

```python
# Hyper parameters
SEED = 42
N_EPOCHS = 200
LR = 1e-3
BATCH_SIZE = 128
HIDDEN_SIZES = [512, 512]
LATENT_DIM = HIDDEN_SIZES[-1]

INPUT_ORDERINGS = 0
# Input ordering meaning
# -1: Generate a new input ordering every time it is needed. Not realistic.
#  0: Natural order
#  1: A single input ordering, but shuffled
# 2..: Does it even make sense to sample multiple orderings throughout training?
#      It feels like it would only confuse the network about which input is which ...

CONNECTIVITY_ORDERINGS = 1
# Connectivity orderings
# -1: Generate new masks every time. Unrealistic.
#  0: Equivalent to a vanilla autoencoder
# 1..: Use the specified amount of randomly generated masks.

RESAMPLE_INTERVAL = 20

USE_GPU = True
device = th.device("cuda" if th.cuda.is_available() and USE_GPU else "cpu")

# Seeding
random.seed(SEED)
np.random.seed(SEED)
# Pytorch seeding
th.manual_seed(SEED)
if device.type == "cuda":
    th.cuda.manual_seed_all(SEED)
```

We can now format the data to facilitate the training loop, now that we have defined the `BATCH_SIZE` hyperparameter.

```python
# Data loading
mnist = np.load("binarized_mnist.npz")
train, test = mnist['train_data'], mnist['valid_data']
train = th.from_numpy(train).to(device)
test = th.from_numpy(test).to(device)

trainset = th.utils.data.DataLoader(train, batch_size=BATCH_SIZE, shuffle=True)
testset = th.utils.data.DataLoader(test, batch_size=BATCH_SIZE, shuffle=False)
```

Entering the most challenging part, we create the MADE model, implementing the input ordering, the mask generation, and the mask application method.
```python
# Model
INPUT_DIM = 28**2
OUTPUT_DIM = 28**2

# Augmenting the nn.Linear to support masks
class MaskedLinear(nn.Linear):
    def __init__(self, in_features, out_features, bias=True):
        super().__init__(in_features, out_features, bias)
        self.register_buffer('mask', th.ones(out_features, in_features))

    def set_mask(self, mask):
        self.mask.data.copy_(th.from_numpy(mask.astype(np.uint8).T))

    def forward(self, input):
        return F.linear(input, self.mask * self.weight, self.bias)

class EncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Dynamic generation of masks and orderings, for efficiency
        self.connectivity_ordering_seed = 0
        self.input_ordering_seed = 0
        # Pay attention to the "s"
        self.connectivity_orderings = CONNECTIVITY_ORDERINGS
        self.input_orderings = INPUT_ORDERINGS

        # MLP Layers
        self.encoder_layers = nn.ModuleList()
        self.decoder_layers = nn.ModuleList()

        # Hidden sizes, persistent
        self.encoder_hiddens = HIDDEN_SIZES
        self.decoder_hiddens = HIDDEN_SIZES[::-1][1:]
        self.decoder_hiddens.append(INPUT_DIM)

        # This defines d for each component of the input vector (m_0 in the paper)
        self.input_ordering = [i for i in range(1, INPUT_DIM + 1)]

        current_dim = INPUT_DIM
        current_input_orders = self.input_ordering  # (m_l in the paper)

        ## Encoder layers
        for hsize in self.encoder_hiddens:
            # Linear layer with weights and biases included
            self.encoder_layers.append(MaskedLinear(current_dim, hsize))
            current_dim = hsize

        ## Decoder layers
        for hsize in self.decoder_hiddens:
            # Linear layer with weights and biases included
            self.decoder_layers.append(MaskedLinear(current_dim, hsize))
            current_dim = hsize

        ## Input ordering: if only using one random shuffling, do it here,
        # then skip it in the shuffling function, for efficiency
        if self.input_orderings == 1:
            np.random.shuffle(self.input_ordering)

        # Lets us know if the masks were generated at least once already,
        # for efficiency when using a single connectivity ordering
        self.mask_generated = False
        self.generate_masks()

    # This is left for reference. Not actually used.
    # The idea is to also shuffle the input ordering as training goes along,
    # but intuitively, it seems too unstable.
    def shuffle_input_ordering(self):
        # Totally random input ordering generation. Quite unstable.
        if self.input_orderings == -1:
            random.shuffle(self.input_ordering)
        if self.input_orderings in [0, 1]:
            pass  # Using natural order
        elif self.input_orderings > 1:
            rng = np.random.RandomState(self.input_ordering_seed)
            self.input_ordering = rng.randint(1, INPUT_DIM + 1, INPUT_DIM)
            self.input_ordering_seed = (self.input_ordering_seed + 1) % self.input_orderings
        else:
            raise NotImplementedError

    def generate_masks(self):
        if self.mask_generated and self.connectivity_orderings == 1:
            return  # Skips mask generation, for efficiency

        if self.connectivity_orderings == -1:
            raise NotImplementedError
        elif self.connectivity_orderings >= 1:
            # Inspired from Andrej Karpathy's implementation. Props where due.
            rng = np.random.RandomState(self.connectivity_ordering_seed)
            self.connectivity_ordering_seed = (self.connectivity_ordering_seed + 1) % self.connectivity_orderings
        else:
            raise NotImplementedError

        current_dim = INPUT_DIM
        current_input_orders = self.input_ordering  # (m_l in the paper)
        layer_connect_counts = []

        for layer_idx, hsize in enumerate(self.encoder_hiddens):
            # Fix #1: Make sure we do not sample connectivity counts that are not in the
            # previous layer, since they won't be useful, and would reduce the number of
            # weights that are actually used for the estimation
            layer_count_low = 1 if len(layer_connect_counts) == 0 else np.min(layer_connect_counts)
            layer_connect_counts = [rng.randint(low=layer_count_low, high=INPUT_DIM) for _ in range(hsize)]
            # Generating a mask for each layer and storing it
            mask = [[1 if layer_connect_counts[k] >= d else 0 for k in range(hsize)]
                    for d in current_input_orders]
            self.encoder_layers[layer_idx].set_mask(np.array(mask))
            current_dim = hsize
            current_input_orders = layer_connect_counts

        # Decoder subsection
        for layer_idx, hsize in enumerate(self.decoder_hiddens):
            if layer_idx == len(self.decoder_hiddens) - 1:
                layer_connect_counts = self.input_ordering
                mask = [[1 if layer_connect_counts[k] > d else 0 for k in range(hsize)]
                        for d in current_input_orders]
            else:
                # Note: In case we did not reach the last layer, we still generate the mask
                # as for the encoder's layers! That was the problem we were having ...
                layer_count_low = 1 if len(layer_connect_counts) == 0 else np.min(layer_connect_counts)
                layer_connect_counts = [rng.randint(low=layer_count_low, high=INPUT_DIM) for _ in range(hsize)]
                # Generating a mask for each layer and storing it
                mask = [[1 if layer_connect_counts[k] >= d else 0 for k in range(hsize)]
                        for d in current_input_orders]
            self.decoder_layers[layer_idx].set_mask(np.array(mask))
            current_dim = hsize
            current_input_orders = layer_connect_counts

        self.mask_generated = True

    def encode(self, x):
        for layer in self.encoder_layers:
            x = F.relu(layer(x))
        return x

    def decode(self, z):
        for layer in self.decoder_layers[:-1]:
            z = F.relu(layer(z))
        z = self.decoder_layers[-1](z)
        return th.sigmoid(z)

    def forward(self, x):
        # Full pass
        return self.decode(self.encode(x))

encdec = EncoderDecoder().to(device)
print(encdec)

optimizer = optim.Adam(list(encdec.parameters()), lr=LR)
```

With the default hyperparameters, we get the following MADE model:

```
EncoderDecoder(
  (encoder_layers): ModuleList(
    (0): MaskedLinear(in_features=784, out_features=512, bias=True)
    (1): MaskedLinear(in_features=512, out_features=512, bias=True)
  )
  (decoder_layers): ModuleList(
    (0): MaskedLinear(in_features=512, out_features=512, bias=True)
    (1): MaskedLinear(in_features=512, out_features=784, bias=True)
  )
)
```

We can now proceed to the training loop of the model:

```python
# Helpers to compute the loss function
def compute_loss(model, data):
    x_pred = model(data)
    loss = F.binary_cross_entropy(x_pred, data)
    return loss

def compute_test_loss(model, testset):
    loss = 0.
    for test_batch_idx, test_batch in enumerate(testset):
        loss += compute_loss(model, test_batch.to(device))
    loss /= (test_batch_idx + 1)
    return loss

# Training loop
# Holders for logging statistics. May be disregarded at first
batch_iter = 0  # Used as a reference to regenerate masks
mb_train_losses = []
test_losses = []
test_loss = compute_test_loss(encdec, testset)
test_losses.append(test_loss.item())

# Note: Used to correct the sampling order, especially when the input ordering is not natural
IDX_TO_ORDERING = {}
for idx in range(INPUT_DIM):
    for comp_idx, comp in enumerate(encdec.input_ordering):
        if comp == idx + 1:
            IDX_TO_ORDERING[idx] = comp_idx
            break  # for efficiency

for epoch in range(N_EPOCHS):
    # Iterate over minibatches
    for mb_idx, mb_train_data in enumerate(trainset):
        # Train the MADE model
        mb_train_loss = compute_loss(encdec, mb_train_data.to(device))
        optimizer.zero_grad()
        mb_train_loss.backward()
        optimizer.step()

        mb_train_losses.append(mb_train_loss.item())
        batch_iter += 1
        if batch_iter % RESAMPLE_INTERVAL == 0:
            encdec.generate_masks()

    # Logging stats: Compute the test / validation loss over the full test set
    # only at the epoch's end.
    test_loss = compute_test_loss(encdec, testset)
    test_losses.append(test_loss.item())
    print("Epoch %d (Last MB Loss)" % (epoch))
    print("\t Train Loss : %.4f , Test Loss: %.4f" % (mb_train_loss, test_loss))

    # Plotting
    if (epoch > 0 and epoch % 1 == 0) or epoch == (N_EPOCHS - 1):
        fig, axes = plt.subplots(1, 2, figsize=(16, 8))
        x_mbs_train = np.arange(len(mb_train_losses))
        # Test loss at the end of each epoch needs to account for the gap in minibatches
        x_mbs_test = np.arange(0, len(mb_train_losses), mb_idx)

        # Plotting train and test losses
        axes[0].plot(x_mbs_train, mb_train_losses, label="MADE Train MB Loss")
        axes[0].plot(x_mbs_test, test_losses, label="MADE Test Loss")
        axes[0].set_xlabel("Minibatches")
        axes[0].set_ylabel("BCE Loss")
        axes[0].set_title("Train and Test losses")
        axes[0].legend()
        axes[0].grid(True)

        # Sampling from the MADE model
        ## Plot parameterization
        N_EVAL_SAMPLES = 100
        N_ROW = int(math.sqrt(N_EVAL_SAMPLES))
        N_ROW += N_EVAL_SAMPLES % N_ROW  # In case there are some left over, add an additional row

        with th.no_grad():
            final_sample = th.rand([N_EVAL_SAMPLES, INPUT_DIM]).to(device)
            # Note that we sample in parallel, which is much faster. That is why we use
            # the [:, corrected_idx] indexing. This restricts it to using a single mask, however.
            for sampled_idx in range(INPUT_DIM):
                corrected_idx = IDX_TO_ORDERING[sampled_idx]
                reconstructed = encdec(final_sample)
                # Properly discretize the output to get concrete results
                reconstructed = th.bernoulli(reconstructed)
                final_sample[:, corrected_idx] = reconstructed[:, corrected_idx]

        # Some reshaping and prettifying
        final_sample = final_sample.view([N_EVAL_SAMPLES, 1, 28, 28])
        final_sample = make_grid(final_sample, nrow=N_ROW).cpu().numpy()
        final_sample = np.moveaxis(final_sample, (0, 1, 2), (1, 2, 0))
        final_sample = np.moveaxis(final_sample, (0, 1, 2), (1, 2, 0))  # applied twice: CHW -> HWC
        axes[1].imshow(final_sample)

        fig.tight_layout()
        plt.show()
```

The results of training are documented below, with a training loss that converges to around 88.0 (negative log-likelihood), and sampled digits similar to the original work. The full source code, as well as the results, can be found and interacted with in the following Colab Notebook.
Some implementation notes:

- When computing the test loss at first, it was done using all $10000$ samples at once. While the results were correct, it created a (GPU) memory problem that prevented training for too many epochs, namely by causing an Out Of Memory error when using a GPU, and grinding the system down to an unresponsive state when using the system memory combined with CPU computation (no GPU). The choice was thus made to evaluate using small batches for the testing too.

- Sampling from the MADE model follows a rather special procedure: since each element of the output is constrained between $0$ and $1$, as it is a probability, just plotting the resulting vector as one would do with a standard VAE would result in the same blurry picture. To obtain an actual picture of a digit, the output of the model has to be fed to a Bernoulli distribution, from which we draw samples, as per the following code extracted from above:

```python
reconstructed = encdec(final_sample)
# Properly discretize the output to get concrete results
reconstructed = th.bernoulli(reconstructed)
```

This is probably why the dataset is characterized as *binarized* MNIST: the actual values of each pixel, be it in the data or in the correctly sampled output, are either $0$ or $1$, disallowing any shade of grey.

- When using a *non-natural ordering* for the inputs, the sampling process has to be adapted so that the correct element is sampled as we iterate over each component of the output vector. In this case, this was achieved by building a mapping from component index to the ordering:

```python
IDX_TO_ORDERING = {}
for idx in range(INPUT_DIM):
    for comp_idx, comp in enumerate(encdec.input_ordering):
        if comp == idx + 1:
            IDX_TO_ORDERING[idx] = comp_idx
            break  # for efficiency
```

then using that map to access the data at the index we are currently sampling, as in the following section of code extracted from above:

```python
for sampled_idx in range(INPUT_DIM):
    corrected_idx = IDX_TO_ORDERING[sampled_idx]
    reconstructed = encdec(final_sample)
    # Properly discretize the output to get concrete results
    reconstructed = th.bernoulli(reconstructed)
    final_sample[:, corrected_idx] = reconstructed[:, corrected_idx]
```

## 2. Two Dimensional data modeling with MADE using categorical distribution

Next, we apply the MADE model to the data given in Exercise 2 of the Week 1 homework of UC Berkeley's Deep Unsupervised Learning course (Spring 2019). The goal is to use the autoregressive property to model the distribution of the provided data, which is represented as a two-dimensional vector. Each component of the vector, however, takes a discrete value between $0$ and $199$. Therefore, we adapt MADE to model a categorical distribution for each component of said vector, while retaining the autoregressive property. Furthermore, we also experiment with various formats of the input, namely (a) left as is, (b) normalized, and finally (c) one-hot vectors for each of $x_1$ and $x_2$. Using (c) as the representative case, we review the parts of the model that change compared to the previous MNIST case. The source code for the latter case is available as the following Google Colab Notebook.

First, the declarations of the model's input-to-hidden and hidden-to-output layers have to be revised according to the new output dimension, taking into account the fact that the output for each component $x_i$ becomes its respective probability distribution over the range of values $0$ to $199$. An especially important part is making sure the mask input ordering is correctly extended to match the one-hot vector input. Fortunately, once that is done, it also takes care of the ordering of the extended output vector that is used to model the respective distributions of the $\hat{x}_i$ components.
As a practical example, let $X = \left( \matrix {x_1 & x_2} \right) = \left( \matrix{144 & 12} \right)$, with the corresponding natural input ordering $\left( \matrix{1 & 2} \right)$. Extended to one-hot vectors: $X^{hot} = \left( [ \matrix {0 & \mathrm{…} & 0 & 1 & 0 & \mathrm{…} & 0} ] , [\matrix{0 & \mathrm{…} & 0 & 1 & 0 & \mathrm{…} & 0}] \right)$, where the $1$s sit at positions $144$ and $12$ of their respective sub-vectors. Concatenating this gives us a vector of dimension $200 \times 2 = 400$, and the corresponding input ordering would thus be $\left( \matrix{ 1 & 1 & \mathrm{…} & 1 & 2 & 2 & \mathrm{…} & 2}\right)$ (the first 200 elements are 1, while the remaining 200 are 2). Pictorially, we get the model represented in the figure below.

The implementation introduced above for the MNIST example is modified to fit this problem. The updated part is presented in the simplified code section below.

```python
# Model definition
INPUT_DIM = 2
N_CLASSES = 200  # Used to transform the inputs into one-hot vectors
OUTPUT_DIM = INPUT_DIM * N_CLASSES  # We aim at having the prob. dist. for each x1 and x2 component

class MaskedLinear(nn.Linear):
    # ... same as MNIST ...

class EncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # ... same as MNIST ...

        # This defines d for each component of the input vector.
        # Since we are using the one-hot input format, we need to adjust
        # the layer connectivity ordering accordingly
        self.input_ordering = []
        for i in range(1, INPUT_DIM + 1):
            for _ in range(N_CLASSES):
                self.input_ordering.append(i)

        current_dim = INPUT_DIM * N_CLASSES
        current_input_orders = self.input_ordering  # (m_l in the paper)

        # ... same as MNIST ...
        self.generate_masks()

    def shuffle_input_ordering(self):
        # ... same as MNIST ...

    def generate_masks(self):
        # ... same as MNIST ...
        # Account for the dimensional change of the input due to the one-hot encoding.
        # The rest follows naturally when generating the masks
        current_dim = INPUT_DIM * N_CLASSES
        current_input_orders = self.input_ordering  # (m_l in the paper)
        # ... same as MNIST ...
```
def encode(self,x): # ... same as MNIST ... def decode(self,z): for layer in self.decoder_layers[:-1]: z = F.relu(layer(z)) z = self.decoder_layers[-1](z) # Separate the logits of the distribution for x_1 and x_2, so as to construct their respective probability distribution # The masking is already taking care of by generating the "self.input_ordering" as in the __init__() method above. x1_logits = z[:, :N_CLASSES] x2_logits = z[:, N_CLASSES:] return x1_logits, x2_logits def forward(self,x): # ... same as MNIST ... # Instanciating the model encdec = EncoderDecoder().to(device) print(encdec) # DEBUG # Optimizer optimiza = optim.Adam(list(encdec.parameters()), lr=LR) Since we are now dealing with multiple classes, we also need to adjust the loss computation to the cross-entropy instead. We must also make sure to properly separate the respective distributions of$x_1$and$x_2$, as well as their label data when computing said loss. Pragmatically, this is achieved as demonstrated in the code section below. # Helper to compute loss function def compute_loss(model, data): x1s = data[:,0] x2s = data[:,1] # One hot and concatenante x1s_hot = F.one_hot(x1s, N_CLASSES).float() x2s_hot = F.one_hot(x2s, N_CLASSES).float() data = th.cat([x1s_hot,x2s_hot], 1) z = model.encode(data) x1_logits, x2_logits = model.decode(z) x1_loss = F.cross_entropy(x1_logits, x1s) x2_loss = F.cross_entropy(x2_logits, x2s) loss = x1_loss + x2_loss return loss, z.detach() Finally, we can proceed to the actual training of the network. While the optimization phase is basically the same as the MNIST case once we have adapted the model structure and the loss, the sampling from the MADE model has to also be changed to take into account the categorical distributions of the input variables$x_1$and$x_2$. # Training loop mb_train_losses = [] test_losses = [] test_loss, _ = compute_loss( encdec, test_batch) test_losses.append( test_loss.item()) # for epoch in range( n_epochs): for epoch in range( N_EPOCHS): # ... 
same as MNIST ... print( "Epoch %d (Last MB Loss)" % (epoch)) print( "\t Train Loss : %.4f , Test Loss: %.4f" %( mb_train_loss, test_loss)) if (epoch > 0 and epoch % 1 == 0) or epoch == (N_EPOCHS -1): fig, axes = plt.subplots(1, 3,figsize=(24,8)) x_mbs_train = np.arange(len(mb_train_losses)) x_mbs_test = np.arange(0, len(mb_train_losses), mb_idx) # Test loss at the end of each epoch needs to account for gap in minibatch # Ploting train and test losses axes[0].plot(x_mbs_train, mb_train_losses,label="MADE Train MB Loss") axes[0].plot(x_mbs_test, test_losses,label="MADE Test Loss") axes[0].set_xlabel("Minibatches") axes[0].set_ylabel("BCE Loss") axes[0].set_title("Train and Test losses") axes[0].legend() # Plotting sampled points and original on the left axes[1].hist2d( full_data[:,0], full_data[:,1], bins=(200,200), cmap='gist_gray') axes[1].set_title('Original') # Sampling from the MADE and reconstructing the data with th.no_grad(): ## Get P_\theta(x_1) and sample from it dummy_input = th.zeros([1, INPUT_DIM * N_CLASSES]).to(device) x1s_pred, _ = encdec(dummy_input) x1_estimate_samples = Categorical(logits=x1s_pred).sample([len(full_data)]).squeeze() x1_estimate_samples_hot = F.one_hot(x1_estimate_samples, N_CLASSES).float() x1_estimate_samples_hot = th.cat([x1_estimate_samples_hot, th.zeros([x1_estimate_samples_hot.shape[0], N_CLASSES]).to(device)], 1) # Adds the expected x2 one hot # Get P(x_2|x_1) and sample the corresponding x2 to comple the pairs _, x2s_pred = encdec(x1_estimate_samples_hot) x2_dist = Categorical(logits=x2s_pred) x2_estimate_samples = x2_dist.sample() # Dimension deduced from x1s estiamted sample shape ! 
So nice of pytorch # Type fix x1_estimate_samples = x1_estimate_samples.cpu().numpy() x2_estimate_samples = x2_estimate_samples.cpu().numpy() axes[2].hist2d( x1_estimate_samples, x2_estimate_samples, bins=(200,200), cmap='gist_gray') axes[2].set_title('Reconstructed (Full data)') axes[2].set_xlim(0, 199) axes[2].set_ylim(0, 199) fig.tight_layout() plt.show() # Shuffle the data in a K-folding style, but without the K. if SHUFFLE_DATA: trainset, test_batch = shuffle_dataset() After training for just 20 epochs, the model can already reconstruct (right plot) the original data (middle plot) quite well enough. The loss however, while still quite high in variance, exhibits a slight decrease. Using MADE definitely gets us better results than when using a naive non-autoregressive model of the form$\tilde{p}(x_1, x_2) = \tilde{p}(x_1)\tilde{p}(x_2 \vert x_1)$, as illustrated in the Figure 8 below. Some additional remarks • The case (c) where the input are encoded as a one-hot vector was selected as the representative case because the other cases, namely (a) input left as is (values of$x_1$and$x_2$ranging froom$0$to$199$), and (b) normalized input, performed comparatively worse. Interestingly, the normalized input version (Figure 10) struggled to reconstruct the provided dataset even more than the one without normalization (Figure 9). For the sake of completeness, the complete source code files are provided in the form of a Google Colab Notebook for playing around with the various parameterization. # Discussion and Conclusion This concludes our dive into the MADE’s inner workings, as well as experiments conducted on the MNIST digits dataset with continuous outputs, and a custom dataset with two-dimensional input data with discrete outputs. In both case, MADE can be used to generate new samples while leveraging the autoregressive property between elements of the input. 
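One of the remarks below — that the masking leaves many weights permanently unused — can be quantified with a quick standalone sketch (the `HIDDEN` width and the uniform sampling of connectivity counts are assumptions consistent with the MNIST setup described earlier; this is not code from the notebooks):

```python
import numpy as np

rng = np.random.default_rng(0)

INPUT_DIM = 784  # flattened MNIST digits
HIDDEN = 512     # assumed hidden-layer width

# Natural input ordering d = 1..D and uniformly sampled connectivity
# counts m in [1, D-1] for the hidden units, as in MADE's first layer
d = np.arange(1, INPUT_DIM + 1)
m = rng.integers(1, INPUT_DIM, size=HIDDEN)

# Input-to-hidden mask: hidden unit j may see input i iff m[j] >= d[i]
mask = (m[:, None] >= d[None, :]).astype(np.float32)  # shape (HIDDEN, INPUT_DIM)

zeroed_fraction = 1.0 - mask.mean()
print(f"Fraction of masked-out input-to-hidden weights: {zeroed_fraction:.2f}")
```

With uniform connectivity counts, roughly half of the first-layer weights end up permanently zeroed, which is why widening or deepening the network restores expressiveness for high-dimensional inputs.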
Some additional remarks

• Realizing the autoregressive property using the masking method proposed in the paper likely makes some weights useless, since they always end up being multiplied by $0$ and do not contribute to the gradient. For small enough inputs, standard-sized neural networks (2 layers of 512 units, for example) seem to work well enough thanks to the universal approximation property of neural networks. For higher-dimensional inputs, increasing the depth of the network, as well as the width of the layers, should also increase the number of weights actually used for the computations, thereby increasing the expressiveness of the model.

• Following the MNIST experiment, it is important to notice that when the width of the layers is smaller than the dimension of the input vector, some inputs are likely not to be used during the autoregression, thereby losing some potentially important relationships that exist in the input. Similarly, it is not clear whether sampling the connectivity counts of the hidden layers from a uniform distribution actually gives us a good autoregression with respect to the given dataset. For example, the output image might be highly dependent on, say, the range $[200, 400]$ in the case of the MNIST digit dataset (somewhere around the center of the picture). Therefore, having more values sampled from that range could create a more useful autoregression and lead to better results when generating new samples. This could be achieved by using a different distribution to generate the connectivity counts for those hidden layers, or by applying some constraint based on prior knowledge. This would however be quite time- and resource-consuming, so using the uniform distribution is a general and “fair” method that works pretty well. Using it should not be that bad after all. (As they say: “If it ain’t broken, don’t fix it”.)

• Personally, this took more time to understand and implement than I would like to admit.
The assignments were expected to be completed in 1 or 2 weeks, but just this MADE exercise took me months, so that would have been a definite fail. Nevertheless, lesson learned.

The data for the models as well as the source code can be accessed in the following Google Drive folder, or GitHub repository.

# Acknowledgments

• The original authors of the MADE paper for publishing their work, as well as their source code, albeit a little bit cryptic for me.
• The enlightening explanations provided by the instructors of the CS294-158-SP19 Deep Unsupervised Learning course, especially on how to sample from a MADE model with categorical distributions as outputs.
• Sir Karpathy’s PyTorch implementation of MADE, which was tremendously useful as a reference and for sanity checks.
• Google Colab for the (partial) computational resources.
# Predicting effectiveness of countermeasures during the COVID-19 outbreak in South Africa using agent-based simulation

## Abstract

COVID-19 has spread rapidly around the globe. While there has been a slowdown of the spread in some countries, e.g., in China, the African continent is still at the beginning of a potentially wide spread of the virus. Owing to its economic strength and imbalances, South Africa is of particular relevance with regard to the drastic measures to prevent the spread of this novel coronavirus. In March 2020, South Africa imposed one of the most severe lockdowns worldwide and subsequently faced the number of infections slowing down considerably. In May 2020, this lockdown was partially relaxed and further easing of restrictions was envisaged. In July and August 2020, daily new infections peaked and declined subsequently. Lockdown measures were further relaxed. This study aims to assess the recent and upcoming measures from an epidemiological perspective. Agent-based epidemic simulations are used to depict the effects of policy measures on the further course of this epidemic. The results indicate that measures that are either lifted too early or are too lenient have no sufficient mitigating effect on infection rates. Consequently, continuous exponential infection growth rates or a second significant peak of infected people occur. These outcomes are likely to cause higher mortality rates once healthcare capacities are occupied and no longer capable of treating all severely and critically infected COVID-19 patients. In contrast, strict measures appear to be a suitable way to contain the virus.
The simulations imply that the initial lockdown of 27 March 2020 was probably sufficient to slow the growth in the number of infections, but relaxing countermeasures might allow for a second severe outbreak of COVID-19 in our investigated simulation region of Nelson Mandela Bay Municipality.

## Introduction

The continued absence of approved vaccinations and medical treatments in the current COVID-19 epidemic (Greenstone and Nigam, 2020) causes broad uncertainty on adequate non-medical countermeasures and policy responses. Since droplet infections are currently assumed to be the main transmission path of the virus (Gengler et al., 2020), social distancing measures and lockdowns became established as the main tools to mitigate infection numbers. Owing to exponential growth rates of COVID-19 infections, both the timing and extent of such measures are considered essential (Scarselli et al., 2021). The impact of measures on the transmission dynamics is dependent on their respective characteristics (Chinazzi et al., 2020; Dignum et al., 2020; Martin-Calvo et al., 2020; Rocklov et al., 2020; Sugishita et al., 2020; Vrugt et al., 2020), thus policymakers’ decisions must pinpoint the specific situation in order to mitigate the utilisation of healthcare capacities efficiently and balance social, economic and epidemiological issues. The United States are one example of the impact of delayed countermeasures, with lockdowns and other measures having been implemented slowly. The total cases surged from 1678 to 307,318 between 16 March 2020 and 6 April 2020 and range above 5,800,000 at the time of writing (27 August 2020) (Johns Hopkins Coronavirus Resource Center, 2020; World Health Organisation, 2020a, c).
Trade-offs between effective countermeasures and economic activity are known (Dignum et al., 2020; Silva et al., 2020) and of particular interest in developing and transition countries as they come along with limited economic resilience, medical treatment abilities, testing capacities and options to practice social distancing. In comparison to most countries of the Global North, the infection dynamics in South Africa and other Sub-Saharan African countries are likely to be additionally driven by factors such as a comparably young population, the distinct prevalence of other infectious diseases such as tuberculosis or HIV and the (spatial) distribution of citizens and economic wealth (Ataguba, 2020; Bannon and Collier, 2003; Davids et al., 2020; Phua et al., 2020; World Health Organisation, 2019, 2020c). Thus, governments of the Global South are forced to react under even greater pressure and uncertainty than in countries with larger testing and healthcare capacities and are nevertheless confronted with the need to anticipate the impact of policy measures. South Africa is a transition country with one of the highest case numbers in the world, but currently faces a sudden drop in new case numbers. From mid-July to mid-September, the number of reported active cases in South Africa declined from ~170,000 to 50,000, which led the South African President Cyril Ramaphosa to announce that his country has presumably passed the peak of infections. Several lockdown measures were eased or ended under consideration of economic and social concerns (Al Jazeera, 2020). A presentation of interim results of an unpublished seroprevalence study for several districts in Cape Town on 2 September 2020 cautiously supports optimistic assumptions about future case numbers in South Africa (Bloomberg, 2020, Hsiao, 2020).
However, since reliable data, especially on unreported cases in South Africa, are scarce, the present analysis aims to contribute to the on-going debate on appropriate countermeasures and the current epidemiological status in South Africa. The impact of different lockdown characteristics and timing on further infection dynamics is simulated and discussed. A particular focus of the analysis is on the possible consequences of fading out measures too early. The South African Nelson Mandela Bay Municipality (NMBM) is selected as a research area, as it is an example of urban areas in the Global South (see Fig. 1). From a spatial perspective, South African cities are characterised by large socioeconomic disparities and segregation (Davids et al., 2020). About 35 million people live below the upper poverty line and the unemployment rate is about 30%. The country is characterised by few agglomerations. Sustained and harsh lockdown measures, such as those initially implemented by the South African government in the beginning of the epidemic, can thus lead to many people having to fight for their economic survival (Department Statistics South Africa, 2019a, b). Comparatively limited capacities of the health system may result in a lack of intensive medical care throughout the country (Bossert et al., 2021; Davids et al., 2020; Schröder et al., 2021), particularly in the poor and densely populated townships or the remote rural areas. The impact of the high prevalence of potential comorbidities (e.g., HIV, tuberculosis) has not yet been conclusively assessed, but is suspected to be detrimental (Hogan et al., 2020). The first cases of COVID-19 in South Africa were reported in early March 2020, with the total number of reported cases and deaths rising to 1353 and 27, respectively. By mid-June, cumulative infections exceeded 100,000, with about 2000 deaths. The previous peak of new daily cases was reached in mid-July, with about 14,000 new cases reported daily.
Since then, the number of daily new cases has decreased significantly (Johns Hopkins Coronavirus Resource Center, 2020; World Health Organisation, 2020b). It remains uncertain whether current policy measures or other effects such as increased (but unobserved) immunity, weather changes or habituated behaviour of the population are responsible for the decline, to name but a few. In April 2020, the government published a catalogue of measures (South African Department of Health, 2020) divided into five general lockdown levels, with level 5 corresponding to the disaster response measures as applied from 26 March to 30 April. A less stringent level 4 lockdown was then imposed until the end of May, followed by several variations of level 3 measures, with an increasing number of activities being allowed. Since 17 August, the restrictions have been set to level 2 (Davids et al., 2020; Wikipedia, 2020). As a novel approach, transportation simulations are applied to mimic human social interactions that are required for COVID-19 transmission (Bontempi et al., 2020; Squazzoni et al., 2020). Reliable data on social interactions at the micro-level are scarce (Bontempi et al., 2020). Agent-based models (ABMs) are thus currently becoming popular as they are able to bridge this gap based on sociological or behavioural economic theories and assumptions. Using demographic as well as network and location data, human behaviour and the resulting social contacts can be simulated to a certain extent and contact-driven infection events can be computed from known infection parameters such as infectivity (Davids et al., 2020; Dignum et al., 2020; Gomez et al., 2020; Hackl and Dubernet, 2019; Muller et al., 2020).
Compared to purely mathematical epidemiological modelling, which became famous in the twentieth century and is still widely used (Hethcote, 2000; Kermack et al., 1927; Remuzzi and Remuzzi, 2020; Squazzoni et al., 2020), agent-based approaches are often better suited to account for spatial effects, inhomogeneous information and the stochastic character of biological systems such as human social interactions (Shi et al., 2014). Anthropocentric ABMs can track such human contact events within a realistic synthetic population in a simulated area throughout the day (Horni et al., 2016). To overcome habitual mono-disciplinarity in scientific attempts to understand current epidemics (Bontempi et al., 2020; Squazzoni et al., 2020), ABMs are generally able to integrate a variety of approaches. The present analysis is based on transportation science, behavioural economics and epidemiological models to simulate an epidemic from spatial and demographic data and corresponding assumptions. As a novel approach, the multi-agent transport simulation framework MATSim (Horni et al., 2016) is used to simulate the movements of a synthetic population in a synthetic network of links and nodes on a common day. The network data is taken from OpenStreetMap (OpenStreetMap contributors, 2017) and consists of linked streets and locations of certain categories such as shops, workplaces or schools. A synthetic population, previously created from South African Census data (Joubert, 2018) is used to populate the network and perform different activities during the day, such as working, having leisure time or spending time at home. In the simulation, all activities are connected by means of transport, e.g., cars, minibus taxis or walking. The MATSim-related epidemic simulation framework Episim (VSP TU Berlin, 2020) is used to track the movements of these agents in the network. 
All encounters of infected and uninfected agents are identified as potential infection events and an infection then occurs based on a stochastic process. In a next step, the infected agents pass through an extended SIR model that is parameterised based on the preliminary epidemiological findings on COVID-19. SIR models exist with various extensions and are among the most common models for analysing epidemics (Anderson and May, 2010; Keeling and Rohani, 2008; Kermack et al., 1927; Roche et al., 2011). In Episim, the agents’ state of health goes through different stages; in short, the agents are either susceptible, infected or recovered. The present analysis shall contribute to the debate on appropriate policy interventions in South Africa. A particular focus is set on the duration of the measures and their impact on the spread of COVID-19. It has recently been suggested that the intensive care units in South Africa may not be sufficient to cope with the current situation (Bossert et al., 2021; Schroder et al., 2021). At the same time, the current status of the COVID-19 pandemic in South Africa has not yet been definitively determined. Accordingly, the timing of termination or reinforcement of interventions is of particular importance to successfully contain the spread of the virus. The results imply that if measures are terminated too early, the healthcare system may be overburdened again. A second purpose of this case study is to further promote the potential of agent-based transportation simulation as a viable approach to bridging the gap between insufficient micro-data on social contact and the impact of policies on human behaviour.

## Methods

The analysis consists of two main steps. First, MATSim-based transportation simulations are used to compose realistic movement and activity profiles of a synthetic population in a network of streets and locations.
Second, Episim-based epidemic simulations identify potential infection events and compute the resulting disease progression based on the MATSim output. The corresponding code including underlying datasets can be downloaded from https://www.comses.net/codebases/d4e5bb89-973d-486b-ab46-26781306ffc9/releases/1.0.0/.

### Transportation simulation

The underpinning transportation simulations are conducted with MATSim version 12.0-2019w48-SNAPSHOT (Horni et al., 2016) and are based on a 10% population sample for NMBM derived from the 2011 Census and 2004 travel survey data (Joubert, 2018). The main descriptive statistics of the synthetic population are presented in Table 1. The population comprises a total of 114,346 agents living in 32,597 households (3.51 agents per household). The age pyramid shown in Fig. 2 and the average age of the synthetic population of 30.27 years show a population structure with higher proportions among the young population groups. The agents move within a virtual network generated from OpenStreetMap data (OpenStreetMap contributors, 2017) (see Table 1 for network statistics). In the course of a simulation run, each agent performs one or more activities, with a run lasting a virtual 108,000 seconds (30 h), starting at midnight and ending at 6 a.m. the following day. The activities are divided into the following groups: “home”, “work”, “educ_primary”, “educ_higher”, “shopping”, “leisure” and “other”. Table 2 shows the absolute numbers and Fig. 3 the spatial concentration of activities. It can be seen that relatively few activities related to work are carried out. This is due to the process of population synthesis, where all activities not clearly identified as primary labour activities fall under the category of “other”. The infectivity parameter for “other” is set to a comparably low value, in order not to overestimate the impact of this mixed indicator.
The simulation has a total of six travel modes that allow agents to travel between activities. The so-called teleportation modes such as pedestrian, bicycle and passenger allow the agents to travel at a predefined speed along a direct air line to their destination. Furthermore, the so-called network modes allow agents to travel with vehicles through the virtual network. These include cars, but also formal public transport in the form of public bus and train, and informal public transport as a minibus taxi. Currently, all formal bus operations in NMBM have been suspended (Algoa Bus Company, 2020). It can therefore be assumed that customers are substituting regular bus services with minibus taxis. The MATSim population file is modified accordingly. Minibus taxis are the backbone of the region’s transport system, while motorised individual transport is of less importance. Public minibus taxis are expected to play an important role in the epidemic simulation, as both the probability and intensity of contact are assumed to be high. In addition, the relatively crowded vehicles mix people from different places of work and residence. In 2014, there were 2374 minibus taxis in NMBM with an average capacity of 15 passengers (Neumann et al., 2015). In order to represent minibus transport, a Demand Responsive Transport (DRT) framework is implemented into the model (Bischoff et al., 2017). The reduced population sample usually requires adjustments to network and vehicle capacities to match the real scenario load. However, in the following scenario, this would result in minibus taxi capacities of 1.5 seats, which is problematic for the epidemic simulation. Furthermore, due to a lack of sufficient data on stops and timetables the DRT module cannot be set up as a stop-based service, which would be more similar to the operating schemes of minibus taxis. 
Instead, DRT vehicles operate as a door-to-door system and thus require adjustments in number of vehicles and passenger capacity to achieve similar trip distances and contact times. Finally, rounding errors can occur with this set-up, as 1 agent represents 10 real people. To address these concerns, after several calibration runs, a model with 2374 DRT vehicles with a capacity of 15 seats is chosen based on occupancy rate, travel distances and the rejection rate (see Table 1 for DRT statistics). The different modules of the simulation such as agents, the road network, the locations of activities or the available transportation modes can be further specified and adapted to specific real-world cases. In this way, analyses of different disciplines can be conducted, e.g. for evaluating the efficiency of transport systems or changes in people’s behaviour as a result of interventions.

### Epidemic simulation

The Episim framework (VSP TU Berlin, 2020) builds on the output of a preceding MATSim run and is designed to detect possible infection events and corresponding infection chains. MATSim simulates the trajectories of agents as they perform their activities and travel within the network. The underpinning OSM-based network file contains information on the types and locations of the different activities. These static locations are complemented by the means of transport, i.e., the specific vehicles, derived from the MATSim-generated plans file. The same file contains information about the timetables of the individual agents. By merging this information, it is possible to identify potential infection events. Episim uses the activity and transportation locations as individual containers. Only agents that sojourn in the same container at the same time can infect each other with a certain probability. Moreover, only infectious agents can infect other agents and only susceptible agents can be infected.
#### Disease progression

Once the agents are infected, they go through different stages of an extended SIR model (Kermack et al., 1927) (see Fig. 4). An exposed state and the option to self-quarantine extend the model to an SEIQR model. Subsequent to an infection, an initially healthy and thus susceptible agent evolves from susceptible to exposed to infectious. Owing to the incubation period, it is assumed that agents are not infectious before day 4 after infection (Anderson et al., 2020; Lauer et al., 2020; Rocklov et al., 2020). From this stage on, a stochastic process determines whether the infected agents are asymptomatic and recover without complications, show mild symptoms, or become seriously sick. The probability of an asymptomatic infection was set to 80%, which refers to the upper bound of the findings of current studies, in order not to overestimate the number of infections according to the conservative approach of the analysis (Birhanu et al., 2020). In the case of symptomatic infection, infected agents are assumed to self-quarantine, as the dangerousness and infectivity of COVID-19 are widely known. The self-quarantine lasts 14 days and is assumed to block all social contact, even within the household. Agents who went into self-quarantine remain technically mobile in the simulation. However, they are neither contagious nor susceptible and thus cannot participate in infection events. About 4.5% of all infected agents become seriously sick ten days after infection. Of these, a share of 25% becomes critical the following day. Infectious agents recover 16 days after infection and patients with severe disease after 23 days (Muller et al., 2020; Silal et al., 2020; Verity et al., 2020). The distinction between seriously sick and critical agents allows us to check the utilisation of the healthcare system. Seriously sick agents require basic hospital care and may become critical, i.e., they require respiratory support and thus intensive care.
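The stage transitions described above can be condensed into a small stochastic sketch. This is a simplified illustration, not the actual Episim implementation: the function name and the quarantine onset day are invented, while the probabilities and day offsets are the ones quoted in the text.

```python
import random

P_ASYMPTOMATIC = 0.80     # share of asymptomatic infections (upper bound used in the text)
P_SERIOUSLY_SICK = 0.045  # share of *all* infected who become seriously sick on day 10
P_CRITICAL = 0.25         # share of the seriously sick who become critical on day 11

def disease_progression(rng):
    """Return a day-indexed dict of health states for one infected agent."""
    states = {0: "exposed", 4: "infectious"}  # incubation: not infectious before day 4
    if rng.random() < P_ASYMPTOMATIC:
        states[16] = "recovered"  # asymptomatic agents recover without complications
        return states
    states[5] = "self-quarantined"  # symptomatic agents withdraw (onset day is illustrative)
    # 4.5% of all infected corresponds to 22.5% of the symptomatic 20%
    if rng.random() < P_SERIOUSLY_SICK / (1.0 - P_ASYMPTOMATIC):
        states[10] = "seriously sick"
        if rng.random() < P_CRITICAL:
            states[11] = "critical"
        states[23] = "recovered"  # severe cases recover on day 23
    else:
        states[16] = "recovered"
    return states
```

Sampling the function many times recovers the 80% asymptomatic share and the 4.5% overall rate of serious illness, which is how the stochastic process shapes aggregate hospital demand in the simulation.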
At this stage, the setup requires two assumptions: First, all infected agents finally recover. As recovered agents are assumed to be immune and no longer infectious, omitting death does not bias further infection events. Immunity is currently a matter of intensive research and discussion. Although there is indication that immunity might be neither absolute nor permanent, it is assumed both for simplicity and lack of evidence that recovered agents are immune (Edridge et al., 2020; Wu et al., 2020). Second, it is assumed that every infected person has sufficient space, economic wealth and social support to withdraw and live in full quarantine for two weeks. Finally, births and natural deaths are not included in the model because simulations cover a relatively short period of time.

#### Infection events

In Episim, infection events are based on a probabilistic model and occur solely in containers. These containers are created from OSM data and the agent activities are generated by MATSim. Each container is assigned to a certain category, such as Work, Minibus Taxis or Leisure, and is associated with a certain infectivity parameter. For example, a crowded and unventilated minibus taxi is more likely to be the site of an infection event than a sprawling public park. The contact intensities are depicted in Table 3 and are taken from the original Episim configuration. As the original framework was configured for Berlin, Germany, some values were adjusted to reflect regional conditions in the NMBM. The parameter for home-based activities was increased from 3 to 6 in consideration of larger household sizes and general living conditions. The value for minibus taxis providing public transport services was set to 20 (the original value for public transport was 10 and referred to public transport in Berlin).
Once a susceptible and a contagious agent are in the same container, an infection occurs with a certain probability described by $${P}_{n,t}=1-\exp [-\theta \mathop{\sum}\limits_{m\ne n}{q}_{m,t}{i}_{nm,t}{\tau }_{nm,t}].$$ (1) Here, Pn,t denotes the infection probability of susceptible agent n at day t in a contact with an infectious agent m. The infection probability is determined by three contact-specific parameters qm,t, inm,t and τnm,t. Moreover, the parameter θ is a calibration parameter and is used to adjust simulated to real infection numbers in a subsequent calibration process. q denotes the shedding rate, i.e., the general infectivity of the virus, which could vary by time and agent. i is an activity-specific parameter that is intended to take into account the different contact intensities at different locations. τ denotes the time that the agents m and n spent together in the same container. In the present analysis, θ is set to 1.5 × 10−7 (see Figs. 5 and 6) and q is assumed to be equal for all simulated contacts. The chosen parameters for i are depicted in Table 3. Since it is unlikely that a person in a container interacts with every other person, each agent can infect at most three other agents. #### Policy interventions The South African government has published a catalogue of countermeasures for five different levels of lockdowns (South African Department of Health, 2020), which are summarised in Table 4. The restrictions aim to mitigate social contacts by limiting economic, educational or social activities to a reasonable level. Table 4 clusters the proposed measures into groups related to the activity groups used in the simulations. The qualitative policy descriptions were translated into quantitative restriction parameters to approximate the impact of the different levels on economic and social life (see Table 3). This approach requires a number of assumptions, such as that the entire population is able and willing to follow the instructions. 
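Equation (1) translates almost line by line into code. The sketch below is illustrative only: the helper name and the example contact values are invented, while θ and the minibus contact intensity of 20 are the values stated in the text.

```python
import math

THETA = 1.5e-7  # calibration parameter theta used in the analysis

def infection_probability(contacts, theta=THETA):
    """P_{n,t} = 1 - exp(-theta * sum over contacts of q * i * tau).

    `contacts` holds one (q, i, tau) tuple per infectious contact m:
    shedding rate q, activity-specific contact intensity i, and the
    time tau (in seconds) spent together in the same container.
    """
    return 1.0 - math.exp(-theta * sum(q * i * tau for q, i, tau in contacts))

# Example: one hour (3600 s) shared with a single infectious passenger in a
# minibus taxi (contact intensity i = 20), with a unit shedding rate q = 1.
p = infection_probability([(1.0, 20.0, 3600.0)])
```

Even with the high minibus intensity, a single one-hour contact yields only a roughly one-percent infection probability; risk accumulates over the many contacts an agent has per day, scaled by the calibration parameter θ.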
However, again following the idea of a conservative approach, these assumptions should result in an underestimation of future case numbers and still allow for several conclusions. The translation of qualitative statements into quantitative numbers is by nature prone to errors, and the results must be interpreted with respect to this uncertainty. However, the catalogue provides valuable points of reference and makes it possible not only to put the various measures for the agents’ activities into an ordinal relationship, but also to relate them to each other. Minor misinterpretations can be corrected by adjusting the calibration parameter in a subsequent step of the model calibration. The Episim framework enables the simulation of non-medical policy interventions of different types and severity. By reducing the prevalence of certain activities carried out by the MATSim agents during the day, the impact of different real-world measures can be reproduced or anticipated in the simulations. The activities that an agent performs during the day are taken from the plans file of the MATSim output and correspond with locations from the OSM network file. In Episim, these locations are used to create containers of certain categories, such as Home, Work, Leisure or Shopping. By default, all activity parameters are equal to 1 and thus all agents perform activities in full accordance with their designated plans. If a government decided to fully restrict only leisure activities, the corresponding activity parameter for Leisure would be 0. Accordingly, the decision to allow only half of the workforce to work would result in an activity parameter of 0.5. As a part of its COVID-19 strategy, the government of South Africa has published a catalogue of lockdown rules with different levels (South African Department of Health, 2020). Level 5 corresponds to very strict regulations, such as curfews from 8 p.m. to 5 a.m. and orders to stay at home except for absolutely necessary reasons.
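The activity-parameter mechanism described above (1 = unrestricted, 0 = fully restricted, 0.5 = half the workforce) can be sketched as thinning an agent's daily plan. This is a hypothetical sketch, not Episim's implementation; the function name and the interpretation of the parameter as a per-activity keep-probability are assumptions for the example.

```python
import random

def restricted_plan(daily_plan, activity_params, rng):
    """Keep each activity with probability equal to its activity parameter.

    `activity_params` maps an activity class to its remaining share, e.g.
    {"leisure": 0.0, "work": 0.5}: leisure fully restricted, half the
    workforce kept home. Unlisted classes default to 1 (unrestricted).
    """
    return [a for a in daily_plan
            if rng.random() < activity_params.get(a, 1.0)]

plan = ["home", "work", "shopping", "leisure", "home"]
reduced = restricted_plan(plan, {"leisure": 0.0, "work": 0.5},
                          rng=random.Random(0))
```

With a leisure parameter of 0 the leisure activity is always dropped, while home activities (parameter 1) are always kept.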
Public transport is severely restricted, as are non-essential occupations. In contrast, level 1 advises the population to behave considerately and follow general personal measures such as wearing masks and keeping a distance of 2 m, but does not restrict personal movement or economic activity in general. For an impact evaluation of lockdown levels on the infection dynamics, the published regulations have been translated into activity parameters, which are used to incorporate the effect of reduced activities during the lockdown periods (see Table 3).

#### Epidemic calibration

The policy parameters are then included in a series of calibration simulations that are adjusted to real infection data by varying the parameter $\theta$ in the infection equation (1). Accordingly, changes in $\theta$ affect the probability of infection between two agents. Since real case numbers for the NMBM are not available, values were calculated on the basis of the number of cases in South Africa, approximated according to the NMBM's percentage of the population (ECDC, 2020). $\theta$ enters the infection equation non-linearly and affects the infectivity parameters of all activities equally, so their order is maintained. The parameter $\theta$ was adjusted in an iterative process with the intention of obtaining a value that yields an infection count that approximates case numbers in real data, is robust to small changes, and does not systematically overestimate case numbers or the rate of spread. For the analyses, $\theta$ was set to $1.5 \times 10^{-7}$. The simulations start with ten randomly selected infected agents, and the onset of infections is aligned with the beginning of May. At that point, the number of official cases exceeded the threshold of 100, which, given the 10% population sample, corresponds to 10 infected agents. The simulated infection chains are adjusted to the number of reported cases until 27 July 2020 (see Figs. 5 and 6).
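The iterative adjustment of $\theta$ can be caricatured as a grid search that picks the candidate whose simulated curve best matches reported cases. Everything here is hypothetical scaffolding for illustration: `run_simulation` stands in for a full Episim ensemble run, and the toy curves are invented for the example rather than taken from the paper.

```python
def calibrate_theta(run_simulation, reported, candidates):
    """Pick the theta whose simulated daily case curve best matches the
    reported cases by sum of squared errors.

    `run_simulation(theta)` is a stand-in for an Episim ensemble run; it
    must return a list of daily case counts aligned with `reported`.
    """
    def sse(theta):
        sim = run_simulation(theta)
        return sum((s - r) ** 2 for s, r in zip(sim, reported))
    return min(candidates, key=sse)

# Toy stand-in: the "simulation" scales a baseline curve linearly with theta.
baseline = [1, 2, 4, 8, 16]
reported = [2, 4, 8, 16, 32]
best = calibrate_theta(lambda t: [t * b for b in baseline], reported,
                       candidates=[1.0, 2.0, 3.0])
```

In this toy setting the scaling factor 2.0 reproduces the reported curve exactly, so it wins the search; the paper's actual procedure additionally weighs robustness and the under- rather than overestimation of cases.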
The current state of research is not yet conclusive about how much the real number of cases exceeds the reported cases. Thus, the simulations were calibrated to the reported cases in a conservative approach, intentionally under- but not overestimating the true infection dynamics. Consequently, the results must be interpreted with caution, as they represent a hypothetical scenario without unreported cases. This approach allows specific conclusions to be drawn on correlations between non-medical countermeasures and infection dynamics. The stochastic selection of initially infected agents introduces some between-run variance due to changes in their daily social contacts. The selection of ten instead of one infected agent already mitigates this bias, and the analysis is also based on an ensemble of 100 simulations. Figures 5 and 6 depict 100 different random realisations and their averages for $\theta = 1.5 \times 10^{-7}$ and variations of 5% and 10% in both directions to check the sensitivity of the models. Table 5 presents the evaluation of the different scenarios by sums of squared errors and slopes of regression lines. The evaluation periods were set from the beginning of June and the beginning of July until 27 July. The earlier periods are not used because they are probably too distorted by the modified initial conditions (ten randomly infected agents on 1 May). Although the quality measures do not allow for an unambiguous ranking of the results, $1.5 \times 10^{-7}$ was chosen as the calibration parameter. It has only the second- or third-best sum of squared errors (SSE) values in the model comparisons, but the slope of its longer-term regression line is smaller than, yet similar to, that of the reported cases. A slightly higher $\theta$ might result in a better fit, but runs the risk of overestimating rather than underestimating the real cases, which can be observed in particular in the regression coefficients for the daily new cases in July.
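The two quality measures used for Table 5, the sum of squared errors and the slope of a fitted regression line over an evaluation window, can be sketched as follows. These are hypothetical helper functions for illustration, not the paper's evaluation code.

```python
def sse(simulated, reported):
    """Sum of squared errors between simulated and reported daily cases."""
    return sum((s - r) ** 2 for s, r in zip(simulated, reported))

def slope(y):
    """Least-squares slope of y against the day index (regression line)."""
    n = len(y)
    x_mean = (n - 1) / 2            # mean of 0, 1, ..., n-1
    y_mean = sum(y) / n
    num = sum((x - x_mean) * (v - y_mean) for x, v in enumerate(y))
    den = sum((x - x_mean) ** 2 for x in range(n))
    return num / den

# A perfectly linear series has slope 1 per day and zero error against itself.
daily = [0.0, 1.0, 2.0, 3.0]
```

Comparing the simulated slope against the slope of the reported cases over the same window then indicates whether a candidate $\theta$ over- or underestimates the rate of spread.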
## Results

The main results of the simulations are depicted in Figs. 7 to 10 and are presented in the form of a simulated time series of active corona cases per day. The cumulative number of recoveries is additionally displayed, as it allows a robustness check and adds a cumulative perspective. Each figure is divided into six panels, each representing one possible policy scenario, with lockdown measures ranging from level 1 (no or lax measures) to level 5 (strict measures). The lockdown intensities are taken from an official scenario description by the South African government (South African Department of Health, 2020) (see Table 4) and differentiate several activity types such as work, leisure or education. All simulations started on 12 May 2020, when the number of active infections in the real world equalled 100 and a level 4 lockdown was in place. Level 3 measures were introduced on 1 June and reinforced with new school closures on 27 July 2020. When conducting the simulations, it was assumed that this tightened lockdown would last for 30 days until 26 August 2020, long enough to have a noticeable effect. From 26 August on, different durations of the proposed lockdown levels are simulated to analyse the impact of early or delayed lifting of the different measures. The general shape of the infection curves is similar for all results. Until 26 August 2020, sustained exponential growth with few deviations can be observed. From 26 August 2020 on, the curves differ in terms of infection dynamics. As expected, strict and long lockdowns tend to decrease the active case numbers, while lax and short lockdowns lead to a faster spread of the virus. Figure 7 depicts the result for additional 30-day lockdown periods from 26 August until 25 September 2020. The period prior to the anticipated measures was characterised by a strict level 3 lockdown.
Thus, as expected, reducing the lockdown intensity to level 1 or 2 leads to increased virus activity and an approximately tenfold increase in active cases within 30 days. Maintaining the stricter level 3 measures results in a downward-sloping curve. Obviously, the mitigating impact of the measures requires some time before it becomes visible. Stricter measures are able to slightly reduce the number of active infections. In all six scenarios, the subsequent end of all measures on 25 September 2020 leads to an accelerated increase in active numbers. In all scenarios, the infection curves peak between October and November, which is due to high numbers of both infected and recovered, and thus a large number of immune persons. Notably, the peak of infected individuals in all scenarios occurs at about the same time at the intersection with the recovery curve, with both variables having scenario-dependent values between 200,000 and 300,000, representing between half and two-thirds of the NMBM population. A change from 30 to 60 days of simulated additional measures (Fig. 8) leads to strong effects on case numbers. The general trends apply, as described above, in terms of accelerated growth in numbers for lax lockdown levels and a decline in the stricter scenarios. Although this decline is not sufficient to mitigate the case numbers even after the measures end on 25 October 2020, the peak of infections is shifted several weeks into the future and occurs in early 2021. Figures 9 and 10 refer to 90 days and unlimited durations of lockdowns, respectively. Consistent with the previous comparison, the extended strict measures are able to reduce case numbers. In scenarios with unlimited measure duration, the virus dies out under a level 5 lockdown by the end of 2020 in most simulations. However, ending the lockdown even at low case numbers leads to a return to the exponential growth path in the surviving infection chains.
Table 6 provides an overview of the infection locations by computing the share of infections per activity class for all lockdown levels and durations. In all configurations most agents are infected at home (~86% to 96%), followed by minibus taxis (0% to 17%) and educational activities (2% to 11%). Despite some minor variations, mainly due to the stochastic part of the simulations and the relative character of the values shown, a higher lockdown level and a longer lockdown duration result in a lower share of out-of-home activities. Since more agents are forced to stay home under a harsh lockdown, they are more often infected by returning infectious family members. This trend is amplified by the comparatively high average and maximum household size (see Table 1). The prevalence of infections per activity class generally correlates with the infectivity parameters (see Table 3) and the prevalence of activities (see Table 2). The infectivity parameters for minibus taxis and primary education activities were set to comparably high values, and education activities are the second most frequent activity class after other activities. The shift of infection locations from out-of-home activities to the agents’ homes with stricter measures is much less evident, or even imperceptible, for the short lockdown durations of 30 and 60 days compared to the other groups. This is due to a comparably long period without any measures following the corresponding lockdown period, which then balances the values again.

## Discussion

Most simulation configurations with lenient measures indicate a peak of active cases in autumn 2020. From then on, the dynamics of the epidemic slow down, as both currently infected and recovered (and thus non-susceptible) agents account for about half and up to two-thirds of the population, respectively.
This proportion is in line with current epidemiological estimates of the required thresholds for herd immunity (Fontanet and Cauchemez, 2020; Kwok et al., 2020; Randolph and Barreiro, 2020). The development of infection numbers over time is determined by the level and duration of the lockdown. In general, continuous and strict lockdowns can be confirmed as effective measures to flatten infection curves and postpone large numbers of cases into the future or even eventually eradicate the virus (Tian et al., 2020; Vasconcelos et al., 2020). This finding is consistent with the observed effectiveness of the stringent measures in Wuhan, China (Lau et al., 2020). With lax measures, the effect of subsequent exponential growth of infected individuals over time outweighs the effect of reduced social contacts (Anderson et al., 2020). The measures that are sufficient to produce decreasing case numbers in the simulations are between a strict and a lax level 3 variant. However, since θ and other parameters were chosen to underestimate case numbers, the actual level required for containment could also be at level 4 or 5. In any case, it can be assumed that level 2 measures will prove insufficient in a period of rapidly increasing case numbers. In all scenarios, a complete termination of the lockdown ultimately leads to a renewed increase in infections. Except for a comparatively long and strict lockdown scenario, the measures are not sufficient to eradicate the virus. However, they can considerably reduce the number of new cases and therefore relieve the burden on the health system. The strict measures can thus prove to be a means of gaining time to expand the capacities of the health system and to research medical treatment and prevention measures. However, as the case numbers of any active epidemic return to an exponential growth path after the measures have ended, an on-and-off strategy of strict measures could prove risky and of little benefit. 
The results presented above were generated with an agent-based transport simulation framework. Agent-based transport simulations typically aim to model realistic human behaviour at the micro-level in relation to the agents’ environment. As the artificial agents interact in the network, contact durations can be tracked as important infection parameters and used to simulate infection chains. Agent-based modelling can easily integrate available data, but the approach also allows bridging data gaps with theory-based assumptions. For example, agents’ daily movements from home to work can be modelled either using detailed travel diaries or, if no such data are available, based on assumptions, e.g., that humans act in a utility-maximising way, i.e., choose the fastest route or the most appropriate transport mode. The idea of MATSim is to link the different activities of agents throughout the day with appropriate means of transport. The population data used in this analysis contain information about the agents’ daily schedules and the class of each activity performed. Thus, such models can pass more detailed lockdown parameters to the simulation than other current approaches. However, choosing and calibrating these parameters is challenging because their exact values are usually unknown and can hardly be translated into quantitative numbers. Other contributions address the problem by using, e.g., mobility data or severity indices, or simply by not distinguishing between constraints for different activities (Davids et al., 2020; Silva et al., 2020). The present simulation setup is associated with a number of limitations resulting from the assumptions made for calibration. The main role of the utilised policy parameters is to balance the infection locations by class and to model the impact of specific policies and network effects. Deriving the policy parameters from the government’s published policy catalogue thus puts the parameters in a sufficiently accurate context.
However, unlike the simulated agents, many people are not able to follow the regulations accurately, so real-world out-of-home infections may be underestimated (with the exception of education activities, which are easier to monitor). In the epidemic simulation, 100 initially infected people, corresponding to 10 randomly selected infected agents in the 10% sample, were assumed. The higher number of initially infected agents reduced the variance in the ensemble of realisations. However, this procedure creates artificial conditions at the beginning of the simulations and distorts the natural infection process in the first few days, thus reducing the validity of the results. The short-term distorting effect was accepted in favour of a sustainably lower variance in the simulations. The Episim simulations are finally calibrated by adjusting the calibration parameter $\theta$. The strategy of underestimating infection numbers leads to a systematic underestimation of daily case numbers in the simulations and to more (still) susceptible agents than in reality. Accordingly, the simulated scenarios are likely to have a delayed peak of infections and a prolonged epidemic course. 27 July 2020 was chosen as the cut-off date for the calibration. At the time of writing, official case numbers in South Africa had peaked at ~14,000 (daily new) or 350,000 to 450,000 (cumulative) at the end of July and have declined log-linearly since then. In September, during a period of comparatively lax measures, daily new cases consolidated between ~1000 and 2000, with a cumulative total of 650,000 cases. One possible interpretation of the simulated results in relation to these developments is that, as intended, the simulations underestimated the infection dynamics and that the peak in new cases observed in South Africa in July 2020 corresponds to the peaks in the simulations dating to autumn 2020 at the earliest.
In this case, with a population of about 58 million and an estimated required proportion of non-susceptible agents of at least 50% during the peak, about 98% of all infections in South Africa would have been unreported. This value is higher than previous estimates of up to 87% of unreported cases (Hao et al., 2020; Pei, 2020), but could be explained by the demographic and economic structure of South Africa. The observed peak in July and August (see Fig. 11) could turn out to be an intermediate peak, followed by a temporary decline in case numbers. On the one hand, the consolidation of cases in September could give support to this hypothesis and might denote the beginning of a second period with higher infection numbers. On the other hand, comparatively lax level 2 measures were applied during this time and could slow down the decline. Other possible explanations relate to unobserved epidemic parameters that were not included in the simulation, such as environmental influences (Bontempi et al., 2020), changes in hygiene and social distancing behaviour, weather changes or mutations of the virus.

## Conclusion

The results stress that, from a purely epidemiological point of view, a strict and long lockdown is a theoretically first-best countermeasure in all simulations. A strict lockdown leads to significantly fewer infections compared to all other scenarios and thus reduces the impact on the limited healthcare system capacities. Under the assumptions discussed above, the simulations suggest that a certain combination of intermediate measures might be sufficient to consolidate the number of active cases. The results cannot confirm beyond doubt President Cyril Ramaphosa’s statement that infections have peaked. However, assuming a high proportion of unreported cases and in view of the intended underestimation of infection dynamics, the simulations cautiously indicate that this assumption is sound.
## Data availability

The simulation setup used for this study can be downloaded from the following link: https://www.comses.net/codebases/d4e5bb89-973d-486b-ab46-26781306ffc9/releases/1.0.0/. The original epidemic simulation framework used during the current study is available under https://github.com/matsim-org/matsim-episim/tree/d796bc4bfdff27d9112f6de5932b7615c9a0420a. The population data analysed during the current study are available under https://doi.org/10.17632/dh4gcm7ckb.1.

## References

• Al Jazeera (2020) COVID-19: what people with HIV should know. https://www.ajmc.com/newsroom/covid19-questions-hivpositive-individuals-want-answered. Accessed 07 Apr 2020 • Algoa Bus Company Official Homepage. https://www.algoabus.co.za/. Accessed 05 Apr 2020 • Anderson RM, May RM (2010) Infectious diseases of humans: dynamics and control. Reprinted. Oxford Univ. Press, Oxford • Anderson RM, Heesterbeek H, Klinkenberg D, Hollingsworth TD (2020) How will country-based mitigation measures influence the course of the COVID-19 epidemic? Lancet 395(10228):931–934 • Ataguba JE (2020) COVID-19 pandemic, a war to be won: understanding its economic implications for Africa. Appl Health Econ Health Policy 18:325–328 • Bannon I, Collier P (2003) Natural resources and violent conflict: options and actions. https://openknowledge.worldbank.org/handle/10986/15047. Accessed 07 Feb 2021 • Birhanu A, Feyisa TO, Chala G (2020) The proportion of asymptomatic cases among SARS-CoV-2 infected patients: a systematic review. Eur J Clin Biomed Sci 6(5):84–89 • Bischoff J, Maciejewski M, Nagel K (2017) City-wide shared taxis: a simulation study in Berlin. In: IEEE 20th International Conference on Intelligent Transportation Systems (ITSC). IEEE. pp. 275–280 • Bloomberg (2020) Exposure to Covid-19 reaches 40% among some Cape Town residents. https://www.iol.co.za/news/south-africa/western-cape/exposure-to-covid-19-reaches-40-among-some-cape-town-residents-81591c85-c980-45eb-baea-62110cdec428.
Accessed 21 Sept 2020 • Bontempi E, Vergalli S, Squazzoni F (2020) Understanding COVID-19 diffusion requires an interdisciplinary, multi-dimensional approach. Environ Res 188:109814 • Bossert A, Kersting M, Timme M, Schröder M, Feki A, Coetzee J, Schlüter J (2021) Limited containment options of COVID-19 outbreak revealed by regional agent-based simulations for South Africa. F1000Research 10:98 • Chinazzi M, Davis JT, Ajelli M, Gioannini C, Litvinova M, Merler S, Pastore y Piontti A, Mu K, Rossi L, Sun K, Viboud C, Xiong X, Yu H, Halloran ME, Longini Jr IM, Vespignani A (2020) The effect of travel restrictions on the spread of the 2019 novel coronavirus (COVID-19) outbreak. Science 368(6489):395–400. https://doi.org/10.1126/science.aba9757 • Davids A, Du Rand G, Georg CP, Koziol T, Schasfoort JA (2020) Social learning in a network model of Covid-19. medRxiv. https://doi.org/10.1101/2020.07.30.20164855 • Department Statistics South Africa (2019b) Work & Labour Force. http://www.statssa.gov.za/?page_id=737&id=1. Accessed 07 Apr 2020 • Department Statistics South Africa-Republic of South Africa (2019a) Five facts about poverty in South Africa. http://www.statssa.gov.za/?p=12075. Accessed 07 Apr 2020 • Dignum F, Dignum V, Davidsson P, Ghorbani A, van der Hurk M, Jensen M, Kammler C, Lorig F, Ludescher LG, Melchior A, Mellema R, Pastrav C, Vanhee L, Verhagen H (2020) Analysing the combined health, social and economic impacts of the coronavirus pandemic using agent-based social simulation. Minds Mach 1–18. https://doi.org/10.1007/s11023-020-09527-6 • Edridge AWD, Kaczorowska JM, Hoste ACR, Bakker M, Klein M, Loens K, Jebbink MF, Matser A, Kinsella CM, Rueda P, Ieven M, Goossens H, Prins M, Sastre P, Deijs M, van der Hoek L (2020) Seasonal coronavirus protective immunity is short-lasting. Nat Med 26:1691–1693. https://doi.org/10.1038/s41591-020-1083-1 • Fontanet A, Cauchemez S (2020) COVID-19 herd immunity: where are we? Nat Rev Immunol.
https://doi.org/10.1038/s41577-020-00451-5 • Gengler I, Wang JC, Speth MM, Sedaghat AR (2020) Sinonasal pathophysiology of SARS-CoV-2 and COVID-19: a systematic review of the current evidence. Laryngoscope Investigative Otolaryngology. https://doi.org/10.1002/lio2.384 • Gomez J, Prieto J, Leon E, Rodriguez A (2020) INFEKTA: a general agent-based model for transmission of infectious diseases: studying the COVID-19 propagation in Bogota-Colombia. https://doi.org/10.1101/2020.04.06.20056119 • Greenstone M, Nigam V (2020) Does social distancing matter? University of Chicago, Becker Friedman Institute for Economics Working Paper (2020-26). https://doi.org/10.2139/ssrn.3561244 • Grimm V, Berger U, DeAngelis DL, Polhill JG, Giske J, Railsback SF (2010) The ODD protocol: a review and first update. Ecol Model 221(23):2760–2768 • Grimm V, Berger U, Bastiansen F, Eliassen S, Ginot V, Giske J, Goss-Custard J, Grand T, Heinz SK, Huse G, Huth A, Jepsen JU, Jørgensen C, Mooij WM, Müller B, Pe’er G, Piou C, Railsback SF, Robbins AM, Robbins MM, Rossmanith E, Rüger N, Strand E, Souissi S, Stillman RA, Vabø R, Visser U, DeAngelis DL (2006) A standard protocol for describing individual-based and agent-based models. Ecol Model 198(1):115–126 • Hackl J, Dubernet T (2019) Epidemic spreading in urban areas using agent-based transportation models. Future Internet 11:92 • Hao X, Cheng S, Wu D, Wu T, Lin X, Wang C (2020) Reconstruction of the full transmission dynamics of COVID-19 in Wuhan. Nature 584(7821):420–424 • Hethcote HW (2000) The mathematics of infectious diseases.
SIAM Rev 42(4):599–653 • Hogan AB, Jewell BL, Sherrard-Smith E, Vesga JF, Watson OJ, Whittaker C, Hamlet A, Smith JA, Winskill P, Verity R, Baguelin M, Lees JA, Whittles LK, Ainslie KEC, Bhatt S, Boonyasiri A, Brazeau NF, Cattarino L, Cooper LV, Coupland H, Cuomo-Dannenburg G, Dighe A, Djaafara BA, Donnelly CA, Eaton JW, van Elsland SL, FitzJohn RG, Fu H, Gaythorpe KAM, Green W, Haw DJ, Hayes S, Hinsley W, Imai N, Laydon DJ, Mangal TD, Mellan TA, Mishra S, Nedjati-Gilani G, Parag KV, Thompson HA, Unwin HJT, Vollmer MAC, Walters CE, Wang H, Wang Y, Xi X, Ferguson NM, Okell LC, Churcher TS, Arinaminpathy N, Ghani AC, Walker PGT, Hallett TB (2020) Potential impact of the COVID-19 pandemic on HIV, tuberculosis, and malaria in low-income and middle-income countries: a modelling study. Lancet Global Health 8(9):e1132–e1141 • Horni A, Nagel K, Axhausen KW (2016) The multi-agent transport simulation MATSim. Ubiquity Press. https://doi.org/10.5334/baw • Hsiao M (2020) Sep 2020 COVID 19 ECHO clinic-YouTube. https://www.youtube.com/watch?v=ZH-nOWgSZBU. Accessed 21 Sep 2020 • Johns Hopkins Coronavirus Resource Center (2020) COVID-19 Map-Johns Hopkins Coronavirus Resource Center. https://coronavirus.jhu.edu/map.html. Accessed 27 Aug 2020 • Joubert JW (2018) Synthetic populations of South African urban areas. Data Brief 19:1012–1020 • Joubert JW (2014) Population generation. https://matsim.atlassian.net/wiki/spaces/MATPUB/pages/15269933/Nelson+Mandela+Bay. Accessed 01 Mar 2021 • Keeling MJ, Rohani P (2008) Modeling infectious diseases in humans and animals. https://doi.org/10.1093/bmb/ldp0410 • Kermack WO, Mc Kendrick A, Walker GT (1927) A contribution to the mathematical theory of epidemics. Proc R Soc London Ser A 115(772):700–721 • Kwok KO, Lai F, Wei WI, Wong SYS, Tang JWT (2020) Herd immunity-estimating the level required to halt the COVID-19 epidemics in affected countries. 
J Infect 80(6):e32–e33 • Lau H, Khosrawipour V, Kocbach P, Mikolajczyk A, Schubert J, Bania J, Khosrawipour T (2020) The positive impact of lockdown in Wuhan on containing the COVID-19 outbreak in China. J Travel Med. https://doi.org/10.1093/jtm/taaa037 • Lauer SA, Grantz KH, Bi Q, Jones FK, Zheng Q, Meredith HR, Azman AS, Reich NG, Lessler J (2020) The incubation period of coronavirus disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application. Ann Int Med 172(9):577–582 • Martín-Calvo D, Aleta A, Pentland A, Moreno Y, Moro E (2020) Effectiveness of social distancing strategies for protecting a community from a pandemic with a data driven contact network based on census and real-world mobility data. https://covid-19-sds.github.io. Accessed 01 Mar 2021 • Müller SA, Balmer M, Neumann A, Nagel K (2020) Mobility traces and spreading of COVID-19. Technische Universität Berlin. https://doi.org/10.14279/depositonce-9835 • Neumann A, Röder D, Joubert JW (2015) Towards a simulation of minibuses in South Africa. J Transport Land Use 8(Feb. 1):137–154 • OpenStreetMap contributors (2017) OpenStreetMap. https://www.openstreetmap.org • Pei S (2020) SenPei-CU/COVID-19: COVID-19. https://doi.org/10.5281/ZENODO.3699624 • Phua J, Weng L, Ling L, Egi M, Lim CM, Divatia JV, Shrestha BR, Arabi Y, Ng J, Gomersall C, Nishimura M, Koh Y, Du B (2020) Intensive care management of coronavirus disease 2019 (COVID-19): challenges and recommendations. Lancet Respir Med. https://doi.org/10.1016/S2213-2600(20)30161-2 • Randolph HE, Barreiro LB (2020) Herd immunity: understanding COVID-19. Immunity 52(5):737–741 • Remuzzi A, Remuzzi G (2020) COVID-19 and Italy: what next? Lancet 395(10231):1225–1228 • Roche B, Drake JM, Rohani P (2011) An Agent-Based Model to study the epidemiological and evolutionary dynamics of Influenza viruses. BMC Bioinformatics 12(87). 
Kersting, M., Bossert, A., Sörensen, L. et al. Predicting effectiveness of countermeasures during the COVID-19 outbreak in South Africa using agent-based simulation. Humanit Soc Sci Commun 8, 174 (2021). https://doi.org/10.1057/s41599-021-00830-w
{}
diabetes2 R Documentation ## Type 2 Diabetes Clinical Trial for Patients 10-17 Years Old ### Description Three treatments were compared to test their relative efficacy (effectiveness) in treating Type 2 Diabetes in patients aged 10-17 who were being treated with metformin. The primary outcome was lack of glycemic control (or not); lacking glycemic control means the patient still needed insulin, which is not the preferred outcome for a patient. ### Usage diabetes2 ### Format A data frame with 699 observations on the following 2 variables. treatment The treatment arm the patient was randomized to. outcome Whether the patient still needs insulin (failure) or met a basic positive outcome bar (success). ### Details Each of the 699 patients in the experiment was randomized to one of the following treatments: (1) continued treatment with metformin (coded as met), (2) metformin combined with rosiglitazone (coded as rosi), or (3) a lifestyle-intervention program (coded as lifestyle). ### Source Zeitler P, et al. 2012. A Clinical Trial to Maintain Glycemic Control in Youth with Type 2 Diabetes. N Engl J Med. ### Examples lapply(diabetes2, table) (cont.table <- table(diabetes2)) (m <- chisq.test(cont.table)) m\$expected
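The `chisq.test` call in the Examples can be mirrored outside R. The sketch below, in Python with `scipy`, runs the same independence test on a hypothetical 3x2 treatment-by-outcome table; the per-cell counts are made up for illustration (the real ones come from `table(diabetes2)`), though they total 699 like the data frame.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 3x2 contingency table: rows are treatments
# (lifestyle, met, rosi), columns are outcomes (failure, success).
# These counts are illustrative only, not the published trial data.
counts = np.array([
    [109, 125],  # lifestyle
    [120, 112],  # met
    [ 90, 143],  # rosi
])

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
```

As in R, `expected` holds the counts you would see under independence of treatment and outcome, which is what `m$expected` prints.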
{}
# Math Help - Linear Transformation Question. 1. ## Linear Transformation Question. Hi, I'm having some trouble with part 2 of this question. Any help would be appreciated. Thanks, Mike 2. Originally Posted by mslodyczka Hi, I'm having some trouble with part 2 of this question. Any help would be appreciated. Thanks, Mike Let A be the matrix for part 1. Find the eigenvalues for A by solving the characteristic equation. Find 3 orthonormal eigenvectors corresponding to the eigenvalues. These are the basis vectors for B'. Form matrix P from these basis vectors. Then P^T = P^-1 from orthonormality. The representation matrix is R = P^-1 A P, which is diagonal with the eigenvalues on the diagonal.
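The recipe in the reply can be checked numerically. Here is a sketch with an arbitrary symmetric matrix standing in for A (the actual matrix from part 1 isn't shown in the thread): build P from orthonormal eigenvectors, confirm P^T = P^-1, and confirm that R = P^-1 A P is diagonal with the eigenvalues on the diagonal.

```python
import numpy as np

# A stands in for the matrix from part 1 (not given in the thread);
# any real symmetric matrix has an orthonormal eigenbasis.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigvals, P = np.linalg.eigh(A)   # columns of P are orthonormal eigenvectors

# Orthonormality gives P^T = P^{-1} ...
assert np.allclose(P.T @ P, np.eye(3))

# ... so the representation matrix R = P^{-1} A P is just P^T A P,
# and it is diagonal with the eigenvalues on the diagonal.
R = P.T @ A @ P
assert np.allclose(R, np.diag(eigvals))
print(np.round(eigvals, 6))
```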
{}
# Integrate cos(lnx)dx 1. ### Cudi1 100 1. The problem statement, all variables and given/known data let u=lnx du=1/x*dx dv=cosdx v=-sin 2. Relevant equations Now I'm confused, as I'm getting nowhere with this substitution. I learned the LIATE rule but it's quite confusing, I have a function within a function 3. The attempt at a solution 2. ### Cudi1 100 for integration by parts to work, I would need two differentiable functions, but the cosdx is not differentiable; would it need to be cosxdx for it to be differentiated? 3. ### Dickfore Is the integral: $$\int{\cos{(\ln{(x)})} \, dx}$$ If it is, make the substitution: $$t = \ln{(x)} \Rightarrow x = e^{t}$$ and substitute everywhere. The integral that you will get can be integrated by using integration by parts twice, or, if you know complex numbers, by representing the trigonometric function through the complex exponential. 4. ### Cudi1 100 it is just cos(lnx)dx 5. ### Dickfore then proceed as I told you. 6. ### Cudi1 100 ok, I'm getting an integral of the form: cos(t)e^t dt. Is this correct? 7. ### Dickfore yes. proceed by integration by parts or using complex exponentials. 8. ### Cudi1 100 ok, thank you for the help. Quick question: why do we have to let t=lnx? 9. ### Dickfore We have a composite function $\cos{(\ln{x})}$ of two elementary functions (trigonometric and logarithmic). As a combination, it does not have an immediate table integral. But the method of substitution, which is nothing but inverting the chain rule for derivatives of composite functions, works exactly for such compound functions. 10. ### Cudi1 100 thank you. Lastly, I end up with 1/2(e^t cost - e^t sint); do I sub back, so that x=e^t and t=lnx, which leaves me with: 1/2 x(cos(lnx)-sin(lnx)) + c? 11. ### Dickfore I think the sign in front of the sine should be + and the result should be: $$\frac{1}{2} x \left[\cos{\left(\ln{(x)}\right)} + \sin{\left(\ln{(x)}\right)}\right]+ C$$ 12.
### Cudi1 100 yes, made a slight mistake. Thank you very much for the help. I tried doing it another way, by letting u=cos(lnx) and dv=dx, and I arrived at the same answer 13. ### Dickfore yes, the x's will cancel.
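The final antiderivative is easy to verify symbolically. A quick sketch with `sympy` (my choice of CAS here, not something used in the thread) differentiates x/2 [cos(ln x) + sin(ln x)] and recovers the integrand cos(ln x).

```python
import sympy as sp

x = sp.symbols('x', positive=True)

# The antiderivative from post #11
F = sp.Rational(1, 2) * x * (sp.cos(sp.log(x)) + sp.sin(sp.log(x)))

# d/dx F should simplify back to the integrand cos(ln x)
assert sp.simplify(sp.diff(F, x) - sp.cos(sp.log(x))) == 0

# sympy can also produce the antiderivative directly for comparison
print(sp.simplify(sp.integrate(sp.cos(sp.log(x)), x)))
```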
{}
Tensor[Christoffel] - find the Christoffel symbols of the first or second kind for a metric tensor - Maple Programming Help Calling Sequences Christoffel(g, h, keyword) Parameters g - a metric tensor on the tangent bundle of a manifold h - (optional) the inverse of the metric g keyword - (optional) a keyword string, either "FirstKind" or "SecondKind" Description • The Christoffel symbol of the second kind for a metric $g$ is the unique torsion-free connection such that the associated covariant derivative operator $\nabla$ satisfies $\nabla g=0$. It can be represented as a 3-index set of coefficients: $\Gamma^{k}_{ij} = \frac{1}{2} g^{kl} \left( g_{li,j} + g_{lj,i} - g_{ij,l} \right)$, where $g_{ij}$ and $g^{ij}$ are the components of the metric and its inverse, respectively, and where a comma indicates a partial derivative. • The Christoffel symbol of the first kind is the non-tensorial quantity obtained from the Christoffel symbol of the second kind by lowering its upper index with the metric: $\Gamma_{kij} = g_{kl} \Gamma^{l}_{ij}$. • The default value for the keyword is "SecondKind"; that is, the calling sequence Christoffel(g) computes the Christoffel symbol of the second kind. • The inverse of the metric can be computed using InverseMetric. • This command is part of the DifferentialGeometry:-Tensor package, and so can be used in the form Christoffel(...) only after executing the commands with(DifferentialGeometry) and with(Tensor) in that order. It can always be used in the long form DifferentialGeometry:-Tensor:-Christoffel. Examples > with(DifferentialGeometry): with(Tensor): Example 1. First create a 2 dimensional manifold $M$ and define a metric $\mathrm{g1}$ on the tangent space of $M$.
> DGsetup([x, y], M); ${\mathrm{frame name: M}}$ (2.1) > g1 := evalDG((1/y^2)*(dx &t dx + dy &t dy)); ${\mathrm{g1}}{:=}\frac{{\mathrm{dx}}{}{\mathrm{dx}}}{{{y}}^{{2}}}{+}\frac{{\mathrm{dy}}{}{\mathrm{dy}}}{{{y}}^{{2}}}$ (2.2) Calculate the Christoffel symbols of the first and second kind for $\mathrm{g1}$. > C1 := Christoffel(g1, "FirstKind"); ${\mathrm{C1}}{:=}{-}\frac{{\mathrm{dx}}{}{\mathrm{dx}}{}{\mathrm{dy}}}{{{y}}^{{3}}}{-}\frac{{\mathrm{dx}}{}{\mathrm{dy}}{}{\mathrm{dx}}}{{{y}}^{{3}}}{+}\frac{{\mathrm{dy}}{}{\mathrm{dx}}{}{\mathrm{dx}}}{{{y}}^{{3}}}{-}\frac{{\mathrm{dy}}{}{\mathrm{dy}}{}{\mathrm{dy}}}{{{y}}^{{3}}}$ (2.3) > C2 := Christoffel(g1, "SecondKind"); ${\mathrm{C2}}{:=}{-}\frac{{\mathrm{D_x}}{}{\mathrm{dx}}{}{\mathrm{dy}}}{{y}}{-}\frac{{\mathrm{D_x}}{}{\mathrm{dy}}{}{\mathrm{dx}}}{{y}}{+}\frac{{\mathrm{D_y}}{}{\mathrm{dx}}{}{\mathrm{dx}}}{{y}}{-}\frac{{\mathrm{D_y}}{}{\mathrm{dy}}{}{\mathrm{dy}}}{{y}}$ (2.4) > CovariantDerivative(g1, C2); ${0}{}{\mathrm{dx}}{}{\mathrm{dx}}{}{\mathrm{dx}}$ (2.5) > TorsionTensor(C2); ${0}{}{\mathrm{D_x}}{}{\mathrm{dx}}{}{\mathrm{dx}}$ (2.6) Example 2. Define an anholonomic frame on $M$ and use this frame to calculate the Christoffel symbol for a metric on the tangent space of $M$. 
> FR := FrameData([dx/(1+x^2+y^2), dy/(1+x^2+y^2)], M1); ${\mathrm{FR}}{:=}\left[{d}{}{\mathrm{Θ1}}{=}{2}{}{y}{}{\mathrm{Θ1}}{}{\bigwedge }{}{\mathrm{Θ2}}{,}{d}{}{\mathrm{Θ2}}{=}{-}{2}{}{x}{}{\mathrm{Θ1}}{}{\bigwedge }{}{\mathrm{Θ2}}\right]$ (2.7) > DGsetup(FR, [E], [sigma]); ${\mathrm{frame name: M1}}$ (2.8) > g2 := evalDG(sigma1 &t sigma1 + sigma2 &t sigma2); ${\mathrm{g2}}{:=}{\mathrm{σ1}}{}{\mathrm{σ1}}{+}{\mathrm{σ2}}{}{\mathrm{σ2}}$ (2.9) > C := Christoffel(g2); ${C}{:=}{-}{2}{}{y}{}{\mathrm{E1}}{}{\mathrm{σ2}}{}{\mathrm{σ1}}{+}{2}{}{x}{}{\mathrm{E1}}{}{\mathrm{σ2}}{}{\mathrm{σ2}}{+}{2}{}{y}{}{\mathrm{E2}}{}{\mathrm{σ1}}{}{\mathrm{σ1}}{-}{2}{}{x}{}{\mathrm{E2}}{}{\mathrm{σ1}}{}{\mathrm{σ2}}$ (2.10) > CovariantDerivative(g2, C); ${0}{}{\mathrm{σ1}}{}{\mathrm{σ1}}{}{\mathrm{σ1}}$ (2.11) > TorsionTensor(C); ${0}{}{\mathrm{E1}}{}{\mathrm{σ1}}{}{\mathrm{σ1}}$ (2.12)
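Outside Maple, the Example 1 computation for g1 = (dx⊗dx + dy⊗dy)/y² can be sketched in Python with `sympy`, applying the coordinate formula for the Christoffel symbol of the second kind directly. This is my own helper, not a Maple or sympy built-in; it reproduces the nonzero components seen in (2.4).

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
coords = [x, y]

# Metric g1 = (dx⊗dx + dy⊗dy)/y^2 and its inverse
g = sp.Matrix([[1 / y**2, 0], [0, 1 / y**2]])
ginv = g.inv()

def christoffel_second_kind(g, ginv, coords):
    """Gamma^k_ij = (1/2) g^{kl} (g_{li,j} + g_{lj,i} - g_{ij,l})."""
    n = len(coords)
    return [[[sp.simplify(sum(
        ginv[k, l] * (sp.diff(g[l, i], coords[j])
                      + sp.diff(g[l, j], coords[i])
                      - sp.diff(g[i, j], coords[l])) / 2
        for l in range(n)))
        for j in range(n)] for i in range(n)] for k in range(n)]

Gamma = christoffel_second_kind(g, ginv, coords)

# Nonzero components, matching (2.4):
# Gamma^x_{xy} = Gamma^x_{yx} = -1/y, Gamma^y_{xx} = 1/y, Gamma^y_{yy} = -1/y
print(Gamma[0][0][1], Gamma[1][0][0], Gamma[1][1][1])
```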
{}
# Polynomial recurrence equation 1. Dec 29, 2004 ### Pietjuh For our combinatorics class there is a bonus exercise in which we have to solve the following recurrence relation: $$L_k(x) = L_{k-1}(x) + xL_{k-2}(x)$$ My first thought was to construct the following generating function: $$F(x,y) = \sum_{k=0}^{\infty} L_k(x) y^k$$. By putting in the recurrence relation I found a formula for this generating function: $$F(x,y) = \frac{1+xy}{1-y-xy^2}$$ Using the geometric series I found that this equals: $$F(x,y) = (1+xy)\sum_{k=0}^{\infty} (y+xy^2)^k = \sum_{k=0}^{\infty} (1+xy)(xy^2)^k\frac{(1+xy)^k}{(xy)^k} = \sum_{k=0}^{\infty}\sum_{n=0}^k{k\choose n}(1+xy)x^ny^{n+k}$$ Now I've tried all afternoon to rewrite this whole power series in terms of y, so I can equate the generating function with this power series and find the required $$L_k(x)$$, but I couldn't do it :( Could someone give me a hint in which direction to look now? 2. Dec 29, 2004 ### Hurkyl Staff Emeritus Reverse the order of summation. (the idea is analogous to reversing the order of integration in an iterated integral)
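Following Hurkyl's hint, collecting the coefficient of y^k after swapping the summation order suggests a binomial closed form. The sketch below checks that guess numerically against the recurrence; note the initial values L_0 = 1 and L_1 = 1 + x are an assumption on my part, forced by the numerator 1 + xy of the generating function (they are not stated in the thread).

```python
import sympy as sp
from math import comb

x = sp.symbols('x')

# Recurrence L_k = L_{k-1} + x L_{k-2}; the numerator 1 + xy of F(x, y)
# forces L_0 = 1 and L_1 = 1 + x (assumed here).
L = [sp.Integer(1), 1 + x]
for k in range(2, 10):
    L.append(sp.expand(L[k - 1] + x * L[k - 2]))

# Candidate closed form suggested by swapping the order of summation:
# L_k(x) = sum_j C(k+1-j, j) x^j
def closed_form(k):
    return sum(comb(k + 1 - j, j) * x**j for j in range((k + 1) // 2 + 1))

for k in range(10):
    assert sp.expand(L[k] - closed_form(k)) == 0
print(L[4])   # 3*x**2 + 4*x + 1
```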
{}
Sum of Geometric Sequence/Examples/Common Ratio 1 Theorem Consider the Sum of Geometric Sequence defined on the standard number fields for all $x \ne 1$. $\ds \sum_{j \mathop = 0}^n a x^j = a \paren {\frac {1 - x^{n + 1} } {1 - x} }$ When $x = 1$, the formula reduces to: $\ds \sum_{j \mathop = 0}^n a 1^j = a \paren {n + 1}$ Proof When $x = 1$, the right hand side is undefined: $a \paren {\dfrac {1 - 1^{n + 1} } {1 - 1} } = a \dfrac 0 0$ However, the left hand side degenerates to: $\ds \sum_{j \mathop = 0}^n a 1^j$ $=$ $\ds \sum_{j \mathop = 0}^n a$ $\ds$ $=$ $\ds a \paren {n + 1}$ $\blacksquare$
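Both branches of the formula are easy to sanity-check numerically against the direct sum; a minimal sketch using exact rational arithmetic:

```python
from fractions import Fraction

def geometric_sum(a, x, n):
    """Sum_{j=0}^{n} a x^j, using the two closed forms from the theorem."""
    if x == 1:
        return a * (n + 1)                  # degenerate case x = 1
    return a * (1 - x**(n + 1)) / (1 - x)   # general case x != 1

a, n = Fraction(3), 7
for x in (Fraction(2), Fraction(1, 2), Fraction(-1), Fraction(1)):
    direct = sum(a * x**j for j in range(n + 1))
    assert geometric_sum(a, x, n) == direct
print(geometric_sum(Fraction(1), Fraction(1, 2), 7))  # 255/128
```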
{}
# Category:Simultaneous Equations This category contains results about Simultaneous Equations. Definitions specific to this category can be found in Definitions/Simultaneous Equations. A system of simultaneous equations is a set of equations: $\forall i \in \set {1, 2, \ldots, m} : \map {f_i} {x_1, x_2, \ldots x_n} = \beta_i$ That is: $\ds \beta_1$ $=$ $\ds \map {f_1} {x_1, x_2, \ldots x_n}$ $\ds \beta_2$ $=$ $\ds \map {f_2} {x_1, x_2, \ldots x_n}$ $\ds$ $\cdots$ $\ds$ $\ds \beta_m$ $=$ $\ds \map {f_m} {x_1, x_2, \ldots x_n}$ ## Subcategories This category has only the following subcategory. ## Pages in category "Simultaneous Equations" The following 3 pages are in this category, out of 3 total.
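In the linear special case — each f_i a linear map — the definition above reduces to a matrix equation, which is how such systems are usually solved in practice. A small illustrative sketch (the particular A and β are arbitrary):

```python
import numpy as np

# Linear instance of the definition: f_i(x_1, x_2, x_3) = sum_j A[i, j] x_j = beta_i
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  1.0]])
beta = np.array([1.0, 12.0, 3.0])

x = np.linalg.solve(A, beta)

# Each equation f_i(x) = beta_i is satisfied simultaneously
assert np.allclose(A @ x, beta)
print(x)
```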
{}
# Caption Contest: “Which end is up? No, I mean the tree.” 0 33 Did you know the Japanese word for “corrupt” is 破損した ? Thanks to CA for the photo, which has been lifted from DNA’s FB page, which lately has devolved into a forest of #OurNA hashtags and exclamation marks, because the less one has to say, the louder it gets: Shall we assume that second prize was four tickets? Will DNA be selecting a matching sombrero for this poor, doomed tree? Anyway, caption contest: Comment here or at Facebook. If I bother selecting a winner, I’ll post it randomly. Or not. Is Christmas over yet? SHARE
{}
## Friday, August 19, 2011 ### installer A few tricks to get out of jail with OS X installs and updates... If you have a package that just won't install due to bad voodoo (this can happen if you install a lot of seeds and the stars misalign) you can use this to force the install with this: sudo CM_BUILD=CM_BUILD COMMAND_LINE_INSTALL=1 installer -verbose -pkg MacOSXUpd10.6.5.pkg -target / If you need to install from an OS CD you can find the package to use for this trick in /Volumes/volname/System/Installatoin/Packages/OSInstall.mpkg One use for this is to force an install onto a partition that the OS doesn't understand.  I had my main drive triple-booted to OS X 10.5.8, Windows Vista (don't get me started) and Ubuntu 8.whatever.  Remaking this delicate balance without three OS reinstalls is virtually impossible, but the OS X Snow Leopard installer didn't want to install because it didn't understand the partition map. The fix on the net is to resize the OS X partition, which causes Disk Utility to fondle the partition map in some useful way, but there's no way I want to risk my other OS installs.  Installing the OS from the command line with a forced install lets me simply dump the OS onto the drive, and then rEFIt just works because, well, it's rEFIt.
{}
## MSAA issues. Old topic! Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic. 5 replies to this topic ### #1 Gavin Williams Posted 19 December 2012 - 02:19 PM Hi, I'm trying to set up MSAA to remove jittering at the edges of primitives/geometry. I have set the following: rsd.IsMultisampleEnabled = true; ... SampleDescription = new SampleDescription(8, 0), on my G-Buffer textures and on my depth buffer. But I get the following errors: D3D11 ERROR: ID3D11Device::CreateDepthStencilView: The base resource was created as a multisample resource. You must specify D3D11_DSV_DIMENSION_TEXTURE2DMS instead of D3D11_DSV_DIMENSION_TEXTURE2D. [ STATE_CREATION ERROR #143: CREATEDEPTHSTENCILVIEW_INVALIDDESC] D3D11 ERROR: ID3D11Device::CreateDepthStencilView: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #148: CREATEDEPTHSTENCILVIEW_INVALIDARG_RETURN] This doesn't make sense to me, because I don't want to use Texture2DMS, I'm not using the sampling results, nor making my own sampler, I'm just trying to use the built-in MSAA. And so, for the sake of resolving the error, I change my DSV: DepthStencilViewDescription dsViewDesc = new DepthStencilViewDescription { ArraySize = 0, Format = depthFormat, Dimension = DepthStencilViewDimension.Texture2DMultisampled, MipSlice = 0, Flags = DepthStencilViewFlags.None, FirstArraySlice = 0, }; Then I get this error: D3D11 ERROR: ID3D11DeviceContext::Draw: The Shader Resource View dimension declared in the shader code (TEXTURE2D) does not match the view type bound to slot 1 of the Pixel Shader unit (TEXTURE2DMS). This mismatch is invalid if the shader actually uses the view (e.g. it is not skipped due to shader code branching).
[ EXECUTION ERROR #354: DEVICE_DRAW_VIEW_DIMENSION_MISMATCH] That doesn't make sense to me, because the depth stencil view isn't in shader code? Am I doing something wrong here? Do I have to use Texture2DMS to use MSAA? Should I just use super-sampling? Thanks for any help. ### #2 MJP Posted 19 December 2012 - 04:16 PM From what you described, it sounds like you're using deferred rendering. With deferred rendering you need to do all kinds of special-case handling for MSAA, it's not just something you can "switch on" and have it work. Not only does it require changing your shaders just to avoid runtime errors, but you need to do things like edge detection to make it more efficient than just supersampling. For your first error, you need to set the DSV dimension to Texture2DMS if the texture you created is multisampled. It doesn't matter whether or not you want to sample it later as a multisampled texture, the DSV still needs to have the proper dimension. For your second error, it sounds like you're sampling the depth buffer through a shader resource view in one of your shaders using a Texture2D in your shader code. Any non-resolved multisampled texture has to be accessed as a Texture2DMS in the shader, you can't use Texture2D. Color MSAA textures can be resolved to a non-MSAA texture which can then be sampled using Texture2D, but you can't do that with a depth buffer (and you don't want to resolve your MSAA textures anyway for deferred rendering). ### #3 Gavin Williams Posted 19 December 2012 - 07:40 PM I don't yet do any MSAA resolve, could that be the cause of this last error? Yes, I'm using deferred rendering (lighting). MJP, I'm reading your Quick Overview of MSAA and looking over some other posts about the subject. I think I have seriously avoided this topic in the past, but I'll need to work it out, because the edge jittering is terrible.
I'm going to start with a 2x2 super-sampling approach. That seems easiest, and my graphics isn't too demanding at the moment, so it seems like a good reference technique. I'm looking for a DirectX11 resolve function, but all I've found so far are custom shaders to resolve the Texture2DMS by accumulating the sub-samples. (I'm trying to use the language, sorry if I'm not spot on with terms). ### #4 MJP Posted 20 December 2012 - 12:20 AM The built-in resolve function is ResolveSubresource. However you can't resolve a depth buffer, and it wouldn't be useful anyway even if you could. The simplest way to implement MSAA with deferred rendering is to have a separate MSAA version of your lighting shader(s), and in that version use Texture2DMS. Then in the shader just loop over all subsamples for a given pixel, calculate the lighting, and average the result. This will be much more expensive than it could be, but it will work. Edited by MJP, 20 December 2012 - 12:21 AM. ### #5 Gavin Williams Posted 21 December 2012 - 07:36 PM Hmm, I have decided not to worry about the 2x2 supersampling. MSAA seems to be more efficient in accomplishing the same result. I understand how it works and I understand how to resolve (a color map). But when I start thinking about depth maps and normal maps, and why you have suggested using MSAA in the lighting shader, I start getting confused. For instance, in the simple case where my wall edges are against the floor, or where the wall top-edge is against the top of the wall section (which is visible, a la Gauntlet / Desktop Dungeons), then the normals and position maps, and depth maps can easily be resolved, because there is continuity from one triangle to the next. But what about when I add a wall section with a passage / doorway? Then the top of the doorway is no longer next to the ground, so there is a discontinuity from the wall triangle to the ground triangle next to it.
Likewise for any geometry that is adjacent to but separate from the ground in screen space. At the moment, that's a real puzzle for me. And I suspect that this is getting into fairly advanced territory. My best guess at an approach is to do the following: add an extra step to my wall rendering ... that being to render the walls' color output into a Texture2DMS, then resolve to a Texture2D and just pass that back into my regular pipeline for the lights and final composition to use. And leave the wall shaders' normal and position output as it is. So there will be a slight misalignment between my color map and my normal / depth maps. Any wall edge that connects with the ground will be handled ok, but edges of elevated objects will be smooth (in color), but the lighting shaders will use the ground normals to light the edge. It doesn't sound perfect, but if it gets rid of the jittering, that might be ok. Here's a video of the problem .. I suspect that there may be a couple of ways to actually implement MSAA, and so I might see it being used one way, quite different to how it's often used. I am going to have to just implement something, see the results and work from there, I'd say. I'll talk about it more after I've at least implemented one version. Thanks. Edited by Gavin Williams, 21 December 2012 - 07:38 PM. ### #6 Gavin Williams Posted 22 December 2012 - 12:45 PM Super-sampling 2x2 improved things, particularly around the ground-wall interface, where there is similar lighting. But edges that had moderate/strong lighting adjacent to zero lighting didn't improve that much, which breaks my idea of ignoring the normals.
I also reworked the textures to remove highlighting at the edges, which I think helps a little as well, but I don't think that's a good thing to start doing, because really it's about contrast between textures, not the textures themselves, and I would then have to create many more textures to handle all the wall shapes if I were to darken around the edges, as I'm currently using a single texture for all walls. A lot of work compared to just employing MSAA. Performance is fine using 2x2 SSAA. Super-sampling 4x4 further reduced the edge aliasing to almost being unnoticeable, but it produced severe texture aliasing, I assume because the linear sampler only samples from 2x2 pixels, not 4x4. I'll have to check this. That would explain why I've seen code for manual texture filtering. Also with 4x4, the 5 buffers (color, normal, position, depth, final-light-map) are 7680x4068 each, so 625MB. I don't think that's very good, and the performance is bad, with a noticeably low frame rate. Next, onto MSAA. Edited by Gavin Williams, 22 December 2012 - 12:53 PM.
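MJP's suggestion — shade every subsample, then average — can be prototyped on the CPU to build intuition before writing the Texture2DMS shader. Here is a tiny numpy sketch; the `shade` function is a deliberately nonlinear stand-in, not real shading code, and the point it makes is why resolving the G-buffer before lighting gives a different (wrong) answer.

```python
import numpy as np

def shade(g):
    # Toy stand-in for the lighting pass: nonlinear on purpose, since
    # real shading (specular, clamping, etc.) is nonlinear in G-buffer values.
    return np.clip(g * 2.0 - 0.5, 0.0, 1.0)

rng = np.random.default_rng(0)
H, W, S, C = 4, 4, 8, 3              # 8 subsamples/pixel, as in SampleDescription(8, 0)
gbuffer = rng.random((H, W, S, C))   # per-subsample G-buffer values

# What the MSAA lighting shader does: shade every subsample, then average
per_subsample = shade(gbuffer).mean(axis=2)

# What a premature resolve would give: average subsamples, then shade
pre_resolved = shade(gbuffer.mean(axis=2))

# The two differ because shading is nonlinear -- which is why the G-buffer
# must stay multisampled (Texture2DMS) into the lighting pass.
assert not np.allclose(per_subsample, pre_resolved)
```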
{}
An element with atomic mass Z consists of two isotopes of mass number Z-1 and Z + 2. The percentage abundance of the heavier isotope is - (A) 0.25 (B)33.3 (C)66.6 (D)75
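Letting p be the fraction of the heavier isotope, the weighted average of the isotopic masses must equal the atomic mass: (1-p)(Z-1) + p(Z+2) = Z, which simplifies to 3p = 1, so p = 1/3 ≈ 33.3%, option (B), independent of Z. A one-line symbolic check:

```python
import sympy as sp

Z, p = sp.symbols('Z p')

# Weighted average of the two isotopic masses must equal the atomic mass Z
solution = sp.solve(sp.Eq((1 - p) * (Z - 1) + p * (Z + 2), Z), p)
print(solution)   # [1/3]
```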
{}
# Properties Label 276.1.h.b.275.1 Level 276 Weight 1 Character 276.275 Self dual yes Analytic conductor 0.138 Analytic rank 0 Dimension 1 Projective image $$D_{2}$$ CM/RM discs -23, -276, 12 Inner twists 4 # Related objects ## Newspace parameters Level: $$N$$ = $$276 = 2^{2} \cdot 3 \cdot 23$$ Weight: $$k$$ = $$1$$ Character orbit: $$[\chi]$$ = 276.h (of order $$2$$, degree $$1$$, minimal) ## Newform invariants Self dual: yes Analytic conductor: $$0.137741943487$$ Analytic rank: $$0$$ Dimension: $$1$$ Coefficient field: $$\mathbb{Q}$$ Coefficient ring: $$\mathbb{Z}$$ Coefficient ring index: $$1$$ Twist minimal: yes Projective image $$D_{2}$$ Projective field Galois closure of $$\Q(\sqrt{3}, \sqrt{-23})$$ Artin image $D_4$ Artin field Galois closure of 4.2.3312.1 ## Embedding invariants Embedding label 275.1 Root $$0$$ Character $$\chi$$ = 276.275 ## $q$-expansion $$f(q)$$ $$=$$ $$q+1.00000 q^{2} -1.00000 q^{3} +1.00000 q^{4} -1.00000 q^{6} +1.00000 q^{8} +1.00000 q^{9} +O(q^{10})$$ $$q+1.00000 q^{2} -1.00000 q^{3} +1.00000 q^{4} -1.00000 q^{6} +1.00000 q^{8} +1.00000 q^{9} -1.00000 q^{12} -2.00000 q^{13} +1.00000 q^{16} +1.00000 q^{18} -1.00000 q^{23} -1.00000 q^{24} -1.00000 q^{25} -2.00000 q^{26} -1.00000 q^{27} +1.00000 q^{32} +1.00000 q^{36} +2.00000 q^{39} -1.00000 q^{46} +2.00000 q^{47} -1.00000 q^{48} -1.00000 q^{49} -1.00000 q^{50} -2.00000 q^{52} -1.00000 q^{54} +2.00000 q^{59} +1.00000 q^{64} +1.00000 q^{69} -2.00000 q^{71} +1.00000 q^{72} +2.00000 q^{73} +1.00000 q^{75} +2.00000 q^{78} +1.00000 q^{81} -1.00000 q^{92} +2.00000 q^{94} -1.00000 q^{96} -1.00000 q^{98} +O(q^{100})$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/276\mathbb{Z}\right)^\times$$. $$n$$ $$97$$ $$139$$ $$185$$ $$\chi(n)$$ $$-1$$ $$-1$$ $$-1$$ ## Coefficient data For each $$n$$ we display the coefficients of the $$q$$-expansion $$a_n$$, the Satake parameters $$\alpha_p$$, and the Satake angles $$\theta_p = \textrm{Arg}(\alpha_p)$$. 
Display $$a_p$$ with $$p$$ up to: 50 250 1000 Display $$a_n$$ with $$n$$ up to: 50 250 1000 $$n$$ $$a_n$$ $$a_n / n^{(k-1)/2}$$ $$\alpha_n$$ $$\theta_n$$ $$p$$ $$a_p$$ $$a_p / p^{(k-1)/2}$$ $$\alpha_p$$ $$\theta_p$$ $$2$$ 1.00000 1.00000 $$3$$ −1.00000 −1.00000 $$4$$ 1.00000 1.00000 $$5$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$6$$ −1.00000 −1.00000 $$7$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$8$$ 1.00000 1.00000 $$9$$ 1.00000 1.00000 $$10$$ 0 0 $$11$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$12$$ −1.00000 −1.00000 $$13$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$14$$ 0 0 $$15$$ 0 0 $$16$$ 1.00000 1.00000 $$17$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$18$$ 1.00000 1.00000 $$19$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$20$$ 0 0 $$21$$ 0 0 $$22$$ 0 0 $$23$$ −1.00000 −1.00000 $$24$$ −1.00000 −1.00000 $$25$$ −1.00000 −1.00000 $$26$$ −2.00000 −2.00000 $$27$$ −1.00000 −1.00000 $$28$$ 0 0 $$29$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$30$$ 0 0 $$31$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$32$$ 1.00000 1.00000 $$33$$ 0 0 $$34$$ 0 0 $$35$$ 0 0 $$36$$ 1.00000 1.00000 $$37$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$38$$ 0 0 $$39$$ 2.00000 2.00000 $$40$$ 0 0 $$41$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$42$$ 0 0 $$43$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$44$$ 0 0 $$45$$ 0 0 $$46$$ −1.00000 −1.00000 $$47$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$48$$ −1.00000 −1.00000 $$49$$ −1.00000 −1.00000 $$50$$ −1.00000 −1.00000 $$51$$ 0 0 $$52$$ −2.00000 −2.00000 $$53$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$54$$ −1.00000 −1.00000 $$55$$ 0 0 $$56$$ 0 0 $$57$$ 0 0 $$58$$ 0 0 $$59$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$60$$ 0 0 $$61$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$62$$ 0 0 $$63$$ 0 0 $$64$$ 1.00000 1.00000 $$65$$ 0 0 $$66$$ 0 0 $$67$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$68$$ 0 0 $$69$$ 1.00000 1.00000 $$70$$ 0 0 $$71$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$72$$ 1.00000 1.00000 $$73$$ 2.00000 
2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$74$$ 0 0 $$75$$ 1.00000 1.00000 $$76$$ 0 0 $$77$$ 0 0 $$78$$ 2.00000 2.00000 $$79$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$80$$ 0 0 $$81$$ 1.00000 1.00000 $$82$$ 0 0 $$83$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$84$$ 0 0 $$85$$ 0 0 $$86$$ 0 0 $$87$$ 0 0 $$88$$ 0 0 $$89$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$90$$ 0 0 $$91$$ 0 0 $$92$$ −1.00000 −1.00000 $$93$$ 0 0 $$94$$ 2.00000 2.00000 $$95$$ 0 0 $$96$$ −1.00000 −1.00000 $$97$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$98$$ −1.00000 −1.00000 $$99$$ 0 0 $$100$$ −1.00000 −1.00000 $$101$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$102$$ 0 0 $$103$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$104$$ −2.00000 −2.00000 $$105$$ 0 0 $$106$$ 0 0 $$107$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$108$$ −1.00000 −1.00000 $$109$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$110$$ 0 0 $$111$$ 0 0 $$112$$ 0 0 $$113$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$114$$ 0 0 $$115$$ 0 0 $$116$$ 0 0 $$117$$ −2.00000 −2.00000 $$118$$ 2.00000 2.00000 $$119$$ 0 0 $$120$$ 0 0 $$121$$ 1.00000 1.00000 $$122$$ 0 0 $$123$$ 0 0 $$124$$ 0 0 $$125$$ 0 0 $$126$$ 0 0 $$127$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$128$$ 1.00000 1.00000 $$129$$ 0 0 $$130$$ 0 0 $$131$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$132$$ 0 0 $$133$$ 0 0 $$134$$ 0 0 $$135$$ 0 0 $$136$$ 0 0 $$137$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$138$$ 1.00000 1.00000 $$139$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$140$$ 0 0 $$141$$ −2.00000 −2.00000 $$142$$ −2.00000 −2.00000 $$143$$ 0 0 $$144$$ 1.00000 1.00000 $$145$$ 0 0 $$146$$ 2.00000 2.00000 $$147$$ 1.00000 1.00000 $$148$$ 0 0 $$149$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$150$$ 1.00000 1.00000 $$151$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$152$$ 0 0 $$153$$ 0 0 $$154$$ 0 0 $$155$$ 0 0 $$156$$ 2.00000 2.00000 $$157$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$158$$ 0 0 $$159$$ 0 0 $$160$$ 0 0 $$161$$ 0 0 $$162$$ 1.00000 1.00000 $$163$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$164$$ 0 0 
$$165$$ 0 0 $$166$$ 0 0 $$167$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$168$$ 0 0 $$169$$ 3.00000 3.00000 $$170$$ 0 0 $$171$$ 0 0 $$172$$ 0 0 $$173$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$174$$ 0 0 $$175$$ 0 0 $$176$$ 0 0 $$177$$ −2.00000 −2.00000 $$178$$ 0 0 $$179$$ 2.00000 2.00000 1.00000 $$0$$ 1.00000 $$0$$ $$180$$ 0 0 $$181$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$182$$ 0 0 $$183$$ 0 0 $$184$$ −1.00000 −1.00000 $$185$$ 0 0 $$186$$ 0 0 $$187$$ 0 0 $$188$$ 2.00000 2.00000 $$189$$ 0 0 $$190$$ 0 0 $$191$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$192$$ −1.00000 −1.00000 $$193$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$194$$ 0 0 $$195$$ 0 0 $$196$$ −1.00000 −1.00000 $$197$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$198$$ 0 0 $$199$$ 0 0 1.00000i $$-0.5\pi$$ 1.00000i $$0.5\pi$$ $$200$$ −1.00000 −1.00000 $$201$$ 0 0 $$202$$ 0 0 $$203$$ 0 0 $$204$$ 0 0 $$205$$ 0 0 $$206$$ 0 0 $$207$$ −1.00000 −1.00000 $$208$$ −2.00000 −2.00000 $$209$$ 0 0 $$210$$ 0 0 $$211$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$212$$ 0 0 $$213$$ 2.00000 2.00000 $$214$$ 0 0 $$215$$ 0 0 $$216$$ −1.00000 −1.00000 $$217$$ 0 0 $$218$$ 0 0 $$219$$ −2.00000 −2.00000 $$220$$ 0 0 $$221$$ 0 0 $$222$$ 0 0 $$223$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$224$$ 0 0 $$225$$ −1.00000 −1.00000 $$226$$ 0 0 $$227$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$228$$ 0 0 $$229$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$230$$ 0 0 $$231$$ 0 0 $$232$$ 0 0 $$233$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$234$$ −2.00000 −2.00000 $$235$$ 0 0 $$236$$ 2.00000 2.00000 $$237$$ 0 0 $$238$$ 0 0 $$239$$ −2.00000 −2.00000 −1.00000 $$\pi$$ −1.00000 $$\pi$$ $$240$$ 0 0 $$241$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$242$$ 1.00000 1.00000 $$243$$ −1.00000 −1.00000 $$244$$ 0 0 $$245$$ 0 0 $$246$$ 0 0 $$247$$ 0 0 $$248$$ 0 0 $$249$$ 0 0 $$250$$ 0 0 $$251$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$252$$ 0 0 $$253$$ 0 0 $$254$$ 0 0 $$255$$ 0 0 $$256$$ 1.00000 1.00000 $$257$$ 0 0 1.00000 $$0$$ −1.00000 $$\pi$$ $$258$$ 0 0 $$259$$ 0 0 $$260$$ 0 0 $$261$$ 0 
[Table of Fourier coefficients $$a_n$$ for $$n$$ up to 999 omitted; almost all entries are zero, and the nonzero values lie in $$\{\pm 1, \pm 2, \pm 3, \pm 4\}$$. For certain $$n$$ the original page also lists the associated character value and angle.]

## Twists

By twisting character:

| Char | Parity | Ord | Type | Twist | Min | Dim |
|---------|------|---|---------|-----------------|-----|---|
| 1.1 | even | 1 | trivial | 276.1.h.b.275.1 | yes | 1 |
| 3.2 | odd | 2 | | 276.1.h.a.275.1 | | 1 |
| 4.3 | odd | 2 | | 276.1.h.a.275.1 | | 1 |
| 12.11 | even | 2 | RM | 276.1.h.b.275.1 | yes | 1 |
| 23.22 | odd | 2 | CM | 276.1.h.b.275.1 | yes | 1 |
| 69.68 | even | 2 | | 276.1.h.a.275.1 | | 1 |
| 92.91 | even | 2 | | 276.1.h.a.275.1 | | 1 |
| 276.275 | odd | 2 | CM | 276.1.h.b.275.1 | yes | 1 |

By twisted newform:

| Twist | Min | Dim | Char | Parity | Ord | Type |
|-----------------|-----|---|---------|------|---|---------|
| 276.1.h.a.275.1 | | 1 | 3.2 | odd | 2 | |
| 276.1.h.a.275.1 | | 1 | 4.3 | odd | 2 | |
| 276.1.h.a.275.1 | | 1 | 69.68 | even | 2 | |
| 276.1.h.a.275.1 | | 1 | 92.91 | even | 2 | |
| 276.1.h.b.275.1 | yes | 1 | 1.1 | even | 1 | trivial |
| 276.1.h.b.275.1 | yes | 1 | 12.11 | even | 2 | RM |
| 276.1.h.b.275.1 | yes | 1 | 23.22 | odd | 2 | CM |
| 276.1.h.b.275.1 | yes | 1 | 276.275 | odd | 2 | CM |
# Seaborn Bar Plot - Tutorial and Examples

### Introduction

Seaborn is one of the most widely used data visualization libraries in Python, built as an extension to Matplotlib. It offers a simple, intuitive, yet highly customizable API for data visualization.

In this tutorial, we'll take a look at how to plot a Bar Plot in Seaborn. Bar graphs display numerical quantities on one axis and categorical variables on the other, letting you see how many occurrences there are for the different categories. Bar charts can be used for visualizing a time series, as well as just categorical data.

### Plot a Bar Plot in Seaborn

Plotting a Bar Plot in Seaborn is as easy as calling the barplot() function on the sns instance, and passing in the categorical and continuous variables that we'd like to visualize:

import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style('darkgrid')

x = ['A', 'B', 'C']
y = [1, 5, 3]

sns.barplot(x=x, y=y)
plt.show()

Here, we've got a few categorical variables in a list - A, B and C. We've also got a couple of continuous variables in another list - 1, 5 and 3. The relationship between these two is then visualized in a Bar Plot by passing these two lists to sns.barplot(). This results in a clean and simple bar graph:

Though, more often than not, you'll be working with datasets that contain much more data than this. Sometimes, operations are applied to this data, such as ranging or counting certain occurrences. Whenever you're dealing with means of data, there will be some error margin associated with them. Thankfully, Seaborn has us covered, and applies error bars for us automatically, as it by default calculates the mean of the data we provide.
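To see that aggregation in action, here's a tiny sketch (the values are made up): when a category appears more than once, Seaborn plots the mean of its values and draws an error bar around it:

```python
import matplotlib.pyplot as plt
import seaborn as sns

sns.set_style('darkgrid')

# Two observations per category - Seaborn draws the means (2 and 6)
# as bar heights, with an error bar for the spread in each category
x = ['A', 'A', 'B', 'B']
y = [1, 3, 5, 7]

sns.barplot(x=x, y=y)
plt.show()
```
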
Let's import the classic Titanic Dataset and visualize a Bar Plot with data from there:

import matplotlib.pyplot as plt
import seaborn as sns

# Set Seaborn style
sns.set_style('darkgrid')

# Import Data
titanic_dataset = sns.load_dataset("titanic")

# Construct plot
sns.barplot(x = "sex", y = "survived", data = titanic_dataset)
plt.show()

This time around, we've assigned x and y to the sex and survived columns of the dataset, instead of the hard-coded lists. If we print the head of the dataset:

print(titanic_dataset.head())

We're greeted with:

   survived  pclass     sex   age  sibsp  parch     fare  ...
0         0       3    male  22.0      1      0   7.2500  ...
1         1       1  female  38.0      1      0  71.2833  ...
2         1       3  female  26.0      0      0   7.9250  ...
3         1       1  female  35.0      1      0  53.1000  ...
4         0       3    male  35.0      0      0   8.0500  ...

[5 rows x 15 columns]

Make sure you match the names of these features when you assign the x and y variables. Finally, we use the data argument and pass in the dataset we're working with, from which the features are extracted. This results in:

### Plot a Horizontal Bar Plot in Seaborn

To plot a Bar Plot horizontally, instead of vertically, we can simply switch the places of the x and y variables. This will make the categorical variable be plotted on the Y-axis, resulting in a horizontal plot:

import matplotlib.pyplot as plt
import seaborn as sns

x = ['A', 'B', 'C']
y = [1, 5, 3]

sns.barplot(x=y, y=x)
plt.show()

This results in:

Going back to the Titanic example, this is done in much the same way:

import matplotlib.pyplot as plt
import seaborn as sns

sns.barplot(x = "survived", y = "class", data = titanic_dataset)
plt.show()

Which results in:

### Change Bar Plot Color in Seaborn

Changing the color of the bars is fairly easy. The color argument accepts a Matplotlib color and applies it to all elements.
Let's change them to blue:

import matplotlib.pyplot as plt
import seaborn as sns

x = ['A', 'B', 'C']
y = [1, 5, 3]

sns.barplot(x=x, y=y, color='blue')
plt.show()

This results in:

Or, better yet, you can set the palette argument, which accepts a wide variety of palettes. A pretty common one is hls:

import matplotlib.pyplot as plt
import seaborn as sns

sns.barplot(x = "embark_town", y = "survived", palette = 'hls', data = titanic_dataset)
plt.show()

This results in:

### Plot Grouped Bar Plot in Seaborn

Grouping bars in plots is a common operation. Say you wanted to compare some common data, like the survival rate of passengers, but would like to group them by some criteria. We might want to visualize the relationship of passengers who survived, segregated into classes (first, second and third), but also factor in which town they embarked from. This is a fair bit of information in a plot, and it can easily all be put into a simple Bar Plot.

To group bars together, we use the hue argument. Technically, as the name implies, the hue argument tells Seaborn how to color the bars, but in the coloring process, it groups together relevant data.

Let's take a look at the example we've just discussed:

import matplotlib.pyplot as plt
import seaborn as sns

sns.barplot(x = "class", y = "survived", hue = "embark_town", data = titanic_dataset)
plt.show()

This results in:

Now, the error bars for the Queenstown data are pretty large. This indicates that the data on passengers who survived and embarked from Queenstown varies a lot between the first and second class.

### Ordering Grouped Bars in a Bar Plot with Seaborn

You can change the order of the bars from the default order (whatever Seaborn thinks makes most sense) into something you'd like to highlight or explore.
This is done via the order argument, which accepts a list of the values in the order you'd like to put them in. For example, so far, it ordered the classes from the first to the third. What if we'd like to do it the other way around?

import matplotlib.pyplot as plt
import seaborn as sns

sns.barplot(x = "class", y = "survived", hue = "embark_town", order = ["Third", "Second", "First"], data = titanic_dataset)
plt.show()

Running this code results in:

### Change Confidence Interval on Seaborn Bar Plot

You can also easily fiddle around with the confidence interval by setting the ci argument. For example, you can turn it off by setting it to None, show the standard deviation of the data instead by setting it to "sd", or even put a cap size on the error bars for aesthetic purposes via the capsize argument.

Let's play around with the confidence interval attribute a bit:

import matplotlib.pyplot as plt
import seaborn as sns

sns.barplot(x = "class", y = "survived", hue = "embark_town", ci = None, data = titanic_dataset)
plt.show()

This now removes our error bars from before:

Or, we could use standard deviation for the error bars and set a cap size:

import matplotlib.pyplot as plt
import seaborn as sns

sns.barplot(x = "class", y = "survived", hue = "who", ci = "sd", capsize = 0.1, data = titanic_dataset)
plt.show()

### Conclusion

In this tutorial, we've gone over several ways to plot a Bar Plot using Seaborn and Python. We've started with simple vertical and horizontal plots, and then continued to customize them. We've covered how to change the colors of the bars, group them together, order them, and change the confidence interval.
Last Updated: February 21st, 2023

David Landup, Author
# Demystifying the tensor product

It seems to me, through my mathematical immaturity, that the tensor product begs for more well-definition. I am working in vector spaces (so we always have a free module), and here is what my professor has shown me thus far. We can define the tensor product of two (multi-linear) maps as follows. Let $$S \in \mathcal{L}(V_1, \dots, V_n; \mathcal{L}(W;Z))$$ and $$T \in \mathcal{L}(V_{n+1}, \dots , V_{n+m};W)$$. We define $$S \otimes T \in \mathcal{L}(V_1, \dots , V_{n+m};Z)$$ by setting $$S \otimes T(v_1, \dots ,v_{n+m})=S(v_1, \dots, v_n)[T(v_{n+1}, \dots , v_{n+m})]$$

Now, we do have $$\mathcal{L}(V_1, \dots , V_{n+m};Z) \cong V^*_1 \otimes \dots \otimes V^*_{n+m} \otimes Z$$ I believe. So it is, up to isomorphism, a tensor but not, itself, a tensor.

Further, suppose that $$V_1, \dots , V_n$$ are vector spaces. We define the tensor product $$V_1 \otimes \dots \otimes V_n = \mathcal{L}(V^*_1, \dots, V^*_n; \mathbb{F})$$ Since we regard $$V$$ and $$V^{**}$$ as identified, we have $$v_1 \otimes \dots \otimes v_n \in V_1 \otimes \dots \otimes V_n$$ defined by $$(v_1 \otimes \dots \otimes v_n)(L_1, \dots, L_n)=L_1(v_1)\dots L_n(v_n)$$

Finally, we have defined a tensor of type $$m,n$$ to be a multi-linear map from $$\underbrace{V^* \times \dots \times V^*}_{m \text{ times}}\times \underbrace{V \times \dots \times V}_{n \text{ times}} \to \mathbb{F}$$.

# problem

So it seems to me that tensor products do not always produce tensors? That a tensor product sometimes is and sometimes is not a map to the field? Which makes me wonder how we can consider the idea to be well-defined? I have been told by some to think about it in terms of the universal property, i.e., it takes multi-linear maps to linear ones, but that isn't as illuminating as some may think. How is one to think about this product and these objects? Thanks for your help!

• I think the UMP approach is the way to go.
Since for any bilinear $B:V\times W\to Z$ there is a unique linear $\phi: V\otimes W\to Z$ s.t. $\phi \circ \otimes=B$, the "right" way to define tensor multiplication follows readily. – Matematleta Jun 25 '16 at 22:11

• Could you expand? Perhaps with an example – RhythmInk Jun 25 '16 at 22:12

• Check math.stackexchange.com/questions/1750015/… for a humble approach – janmarqz Jun 26 '16 at 15:20

My confusion, now all cleared up years later, was one of notation. When we are being slightly less precise, we can define the tensor product of maps, say $$f \otimes g: V_1 \otimes V_2 \to V'_1 \otimes V'_2$$, which is defined by $$f$$ acting on the first coordinate and $$g$$ on the second. For all those wondering, this is not a tensor. This is (slightly lazy) notation demonstrating how we get certain maps once we have taken the tensor product of vector spaces (modules, more generally). $$f \otimes g$$ is not a tensor but acts on them. There is a reason we abuse notation like this, as there is a correspondence of sorts between this "tensor product of maps" and tensor products between spaces of maps.
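To make the notational point concrete, here is how the map $$f \otimes g$$ acts on simple tensors (the standard definition, extended by linearity):

```latex
% Action of f \otimes g on simple tensors, extended linearly:
(f \otimes g)(v_1 \otimes v_2) = f(v_1) \otimes g(v_2)
% Since simple tensors span V_1 \otimes V_2, this determines f \otimes g
% completely; well-definedness follows from the universal property applied
% to the bilinear map (v_1, v_2) \mapsto f(v_1) \otimes g(v_2).
```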
## Random connected area Published on Saturday, 8th February 2020, 04:00 pm; Solved by 167 ### Problem 701 Consider a rectangle made up of $W \times H$ square cells each with area 1. Each cell is independently coloured black with probability 0.5 otherwise white. Black cells sharing an edge are assumed to be connected. Consider the maximum area of connected cells. Define $E(W,H)$ to be the expected value of this maximum area. For example, $E(2,2)=1.875$, as illustrated below. You are also given $E(4, 4) = 5.76487732$, rounded to 8 decimal places. Find $E(7, 7)$, rounded to 8 decimal places.
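As a sanity check on the given values, one can enumerate all $2^{W \cdot H}$ colourings directly: each colouring is equally likely, so averaging the maximum connected black area over all of them gives $E(W,H)$ exactly. This brute force reproduces $E(2,2)$ and $E(4,4)$, but is of course hopeless for the actual $7 \times 7$ problem; the function names below are my own.

```python
from itertools import product

def max_component_area(grid, W, H):
    """Largest 4-connected component of black (=1) cells."""
    seen = set()
    best = 0
    for y in range(H):
        for x in range(W):
            if grid[y][x] and (x, y) not in seen:
                # Flood-fill this component with an explicit stack
                stack = [(x, y)]
                seen.add((x, y))
                size = 0
                while stack:
                    cx, cy = stack.pop()
                    size += 1
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < W and 0 <= ny < H
                                and grid[ny][nx] and (nx, ny) not in seen):
                            seen.add((nx, ny))
                            stack.append((nx, ny))
                best = max(best, size)
    return best

def expected_max_area(W, H):
    """E(W, H): average the maximum black area over all 2^(W*H) colourings."""
    total = 0
    for bits in product((0, 1), repeat=W * H):
        grid = [bits[r * W:(r + 1) * W] for r in range(H)]
        total += max_component_area(grid, W, H)
    return total / 2 ** (W * H)

print(expected_max_area(2, 2))  # 1.875
```
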
While most people will never need to open a .lnk file to edit it, there may be rare occasions when it is necessary or desired. But how do you open and edit a shortcut file? Today's SuperUser Q&A post has the answers.

Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

## The Question

SuperUser reader Jez wants to know how to open .lnk files to view the 'contents' and edit them if needed:

A .lnk file in Windows is an actual file intended to be a shortcut to another file, but I really do want to view the contents of the .lnk file itself. However, I am finding it literally impossible to do so. No matter what I try, my applications are opening the contents of the file it points to (drag and drop into a text or hex editor, File –> Open from a text or hex editor, etc.). Is there any way I can get a program to actually open the .lnk file itself instead of the file it points to?

Is there a way for Jez to actually open .lnk files and edit them?

SuperUser contributors and31415, Julian Knight, and Vinayak have the answer for us. First up, and31415:

Using HxD Hex Editor, you can open .lnk files just fine, as long as you do not drag and drop them. As a workaround, open a command prompt and rename the .lnk file with a different, non-existent extension such as .lne:

• cd /d "X:\Folder\containing\the\shortcut"
• ren "some shortcut.lnk" "some shortcut.lne"

You will then be able to treat the shortcut just like a regular file. When you are done, make sure to rename the file with the original .lnk extension to restore its usual functionality.

Followed by the answer from Julian Knight:

The whole point of a .lnk file is for Windows to treat it as a link to another file, so it should be hard to edit! Perhaps it would help if you described why you want to edit it. You can change the settings of a .lnk file by right-clicking and choosing Properties.
If you really want to edit it, you need a special tool. There are a few of these around including: I have not tried any of these, just Googled them.

You can also edit the properties via PowerShell (from this previous answer on Stack Overflow):

Copy-Item $sourcepath $destination  ## Get the lnk we want to use as a template
$shell = New-Object -COM WScript.Shell
$shortcut = $shell.CreateShortcut($destination)  ## Open the lnk
$shortcut.TargetPath = "C:\path\to\new\exe.exe"  ## Make changes
$shortcut.Description = "Our new link"  ## This is the "Comment" field
$shortcut.Save()  ## Save

Since this uses the Shell COM object, you could also do this with WSH or even VBA in Office!

And finally, the answer from Vinayak:

I have tried this and it works for me on Windows 8.1:

• Just drag and drop them into the Notepad window. If you open them via the Open dialog, Notepad will open the exe file pointed to by the .lnk file.

Opening .lnk files in HxD Hex Editor:

• Open them as you would any file using the Open dialog (File –> Open).

Opening .lnk files using the command prompt:

• Navigate to the folder containing the .lnk files and type the command: TYPE SHORTCUTNAME.LNK

Opening .lnk files in just about any program:

• Start the command prompt, navigate to the folder where the program is located, and use the command: PROGRAM_NAME.EXE "path to LNK file"

Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
Surface current density problem

1. Sep 9, 2012 ppoonamk

1. The problem statement, all variables and given/known data

A static surface current density Js(x,y) is confined to a narrow strip in the xy-plane. In this static problem ∇⋅Js = 0. Show that the line-integral of Js along any cross-section of the strip will yield the same value for the total current I. (The direction of dl in these 2D line-integrals is perpendicular to the line segment; these are not ordinary line-integrals but rather surface integrals in which the third dimension z has shrunk to zero.) Show that I = ∫ Jsx dy = ∫ Jsy dx, where the integrals are over the width of the strip at any desired cross-section.

2. Relevant equations

Gauss's theorem: ∫∫ (∇⋅Js) dxdy = ∫ Js⋅dl

3. The attempt at a solution

I do not understand this line: "The direction of dl in these 2D line-integrals is perpendicular to the line segment; these are not ordinary line-integrals but rather surface integrals in which the third dimension z has shrunk to zero."

The RHS of Gauss's theorem = I

LHS: Integrating from x1 to x2 where x2 − x1 = Δx, dl is perpendicular to the line segment: ∫ Js⋅dl = JsyΔx

Similarly, integrating from y1 to y2 where y2 − y1 = Δy, dl is perpendicular to the line segment: ∫ Js⋅dl = JsxΔy

Is this right? How do I prove the 2nd part of the question?

2. Sep 11, 2012 susskind_leon

You need to be careful with your dl.

∫∫ (∇⋅Js) dxdy = ∫ Js⋅dl

The way you wrote Gauss' theorem down (in 2d) suggests that dl is tangential to the circumference of the cross section. Given that on the LHS you have ∇⋅Js = 0 tells you that I = 0, which is indeed true if you define I as the current that goes AROUND your cross-section. However, what you are actually looking for is ∫ Js⋅n dA, n being perpendicular to the surface.
So this is actually a tricky question for which you need to think a little bit about the divergence theorem in 2d and 3d and so on. Also, instead of Gauss' theorem, you could use Stokes' theorem. But you need to think about stuff very carefully.
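For what it's worth, the cross-section-independence argument susskind_leon is pointing at can be sketched like this (my notation; Ω is the patch of strip between two cross-sections):

```latex
% Take two cross-sections C_1, C_2 of the strip and close them into a
% loop \partial\Omega using the strip edges, where \mathbf{J}_s
% vanishes (the current is confined to the strip).  The 2-d divergence
% theorem then gives
\oint_{\partial\Omega} \mathbf{J}_s \cdot \hat{\mathbf{n}}\, dl
  = \iint_{\Omega} (\nabla \cdot \mathbf{J}_s)\, dx\, dy = 0,
% so the flux through C_1 equals the flux through C_2: the total
% current I is the same across every cross-section.  For a cross-
% section along the y-axis, \hat{\mathbf{n}}\, dl = \hat{\mathbf{x}}\, dy
% and the flux is \int J_{sx}\, dy; along the x-axis it is
% \int J_{sy}\, dx, which is the second part of the question.
```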
## Past Talks

Ryan Lukeman, "A Numerical Investigation of a Two-Layer Frontal Geostrophic Model of the Antarctic Circumpolar Current", Wednesday, August 24 Abstract: The numerical simulation of oceanic flow is a primary research tool for understanding the physical properties of the world ocean. These models range from complex, high-resolution models to simplified models in idealized domains. In the spirit of the latter, a two-layer frontal geostrophic model is discussed for a wind-driven circumpolar flow via an asymptotic reduction of the shallow-water equations. The model is implemented using the finite element method via the software package FEMLAB. The model is used to study the meridional balance, lower-layer outcropping, and parameter variation in the Antarctic Circumpolar Current, the dominant oceanic flow in the Southern Ocean. The effects of varying resolution and timestepping parameters are discussed. Experiments are performed in a number of domain and bottom topography regimes to examine the effects of the Drake Passage and a topographic ridge on the meridional balance and transport that prevails in the current. The results support a mechanism of balance by which momentum imparted by winds at the surface is transferred to the lower layer via eddies and dissipated by the ocean bottom. Garbor Lukacs, "Introduction To Topological Groups", Monday, August 22 Abstract: We present some basic, classical results concerning topological groups, with a focus on locally compact abelian and compact groups, and their duality theories. Our aim is to make the talk as self-contained as possible. Michael Dowd, "Fitting Dynamic Models to Data", Tuesday, August 16 Abstract: In this talk I will discuss the estimation of the state and parameters of a dynamic system from time series observations. The dynamic system considered here is governed by a system of coupled nonlinear differential-equation-based models, numerically implemented as (stochastic) difference equations.
The specific example considered here is a stochastic ecological model for population dynamics which exhibits interesting dynamical behaviour, such as a Hopf bifurcation. It will be illustrated how noisy and incomplete observations of the system state can be used for online estimation of this system using a statistical time series framework.

C. C. A. Sastri, "Unobserved Outcomes and Unobserved Probability", Friday, July 15

Abstract: Suppose that an experiment with an unknown number of possible outcomes is performed and that these outcomes occur according to some random mechanism. Suppose that n independent trials are carried out and that N distinct outcomes have been observed. We attempt to answer the following questions: What is the probability that, on the next trial, an outcome not observed before occurs? (This is called the problem of unobserved probability.) What is the total number of outcomes not observed? (Equivalently, what is the total number of outcomes of the experiment?) This second problem has a long history going back to Turing and is, apart from its mathematical interest, important in many areas such as biology (species sampling), intelligence gathering, numismatics, and literary scholarship. We'll give a brief survey of past work and also discuss recent joint work with Alberto Gandolfi in which a Bayes-like estimator for the number of unobserved outcomes is derived. Such an estimator has the advantage over the existing estimators -- due to Chao and Lee and others -- in that, modulo the fact that Turing's ansatz is used (it is used by everyone else as well), it is derived from first principles, without any ad hoc assumptions, and includes previous estimators as special cases.

Geoff Cruttwell, "A Category Theory View of Products and Sums", Monday, July 11

Abstract: The idea of taking sums or products of sets has been well-known for quite some time.
Looking at these concepts from a category theory point of view, however, demonstrates an interesting relationship between them. Hopefully this talk will demonstrate an instance of why taking the categorical viewpoint can provide new insight into existing ideas. No knowledge of category theory required.

Le Bao, "Model Based on Clustering Among Codon Sites", Tuesday, June 28

Abstract: I will introduce a clustering method in phylogeny which can identify the class labels for site-models. Bielawski, Hong and I call it MBC (model-based clustering), because it is based on the likelihood. We also extend the existing codon models by different combinations of parameters and use LRT to select the best model for real data analysis. These methods are then applied to simulated data and real data for comparison.

Paul Sheridan, "Constructing Confidence Regions for Evolutionary Tree Topologies", Friday, June 24

Abstract: I will talk about how to quantify uncertainty when estimating evolutionary tree topologies using generalized least squares. My intent is to explain the background for my master's thesis, so don't expect anything overly technical.

Caroline Adlam, "The Kepler Problem and Superintegrability", Friday, June 17

Abstract: The notion of a completely integrable Hamiltonian system introduced in the 19th century by Joseph Liouville (1809-1882) has seen many interesting developments since then. I will discuss a generalization of this concept and explain when a completely integrable Hamiltonian system is said to be superintegrable. The classical Kepler problem will be used as an illustrative example to explain these and other properties of Hamiltonian systems.

Huaichun Wang, "Quantifying Codon Usage Bias of the Genes", Thursday, June 9

Abstract: The genetic code is redundant because there are 61 sense codons coding for 20 amino acids. Codons that encode the same amino acid are called synonymous codons. Eighteen of the 20 types of amino acids have synonymous codons.
However, the choice of synonymous codons is not random and they differ among genes within a genome, and among genomes. I will show some statistical measures to quantify the codon usage bias, ranging from purely mathematical terms, such as Shannon's entropy and the effective number of codons, to purely biological models, such as the codon adaptation index.

Steven Noble, "Newton Polygons and Irreducible Polynomials", Wednesday, June 1

Abstract: In this talk I will introduce Newton Polygons, discuss some of their properties, and show how they can be used to create a general Eisenstein criterion. The usual Eisenstein condition says that an n-th degree polynomial f(x) with integer coefficients cannot be factored into lower degree polynomials with integer coefficients if there exists a prime p such that p does not divide the coefficient of x^n, p does divide all the other coefficients, and p^2 does not divide the constant term. This condition can be phrased in terms of Newton Polygons and even be generalized to allow p to divide the constant term more than once.

Jihua Wu, "Some Problems about t Distribution, Dirichlet Distribution and F Distribution", Monday, May 30

Abstract: When the variance of a normal population is unknown, for a test of the mean of the population we usually use the t test. In fact, this method of testing does not require that the population have a normal distribution; even the independence of the samples is unnecessary. The purpose of this paper is to find the region of applicability of the t test, i.e. the necessary and sufficient conditions for a two-dimensional variable to form a test which possesses a t distribution. With a similar purpose, we also give the necessary and sufficient conditions for two tests, one having a Dirichlet distribution and the other an F distribution, respectively.
Pat Keast, "Integration over the Hypercube Using Lattice Methods", Thursday, May 26

Abstract: In low dimensions there are many options for performing numerical integration, mostly based on methods designed to be exact for polynomials. Reliable software packages exist which automatically choose sampling points adapted to the integrand. But when the dimension goes above 8 or 10, these methods become extremely computationally expensive. Traditionally, for higher dimensions, the method of choice has been some variant of Monte Carlo. In the past 30 years, however, there has been a growing interest in what are called "Pseudo Monte Carlo methods". The talk will give an introduction to these methods, and describe more recent work on a particular class of these methods called Lattice Rules.

Robert Milson, "Algebraic Solutions of the Schrödinger Equation", Wednesday, May 18

Abstract: Mathematically, the key problem in classical quantum mechanics is the diagonalization of a Hermitian operator. The difficulty is that the operators are, usually, second-order differential operators, with an infinite dimensional underlying state space. Nonetheless, many important models admit a polynomial basis which reduces the operator to an upper triangular matrix, and thereby allow for an exact calculation of the spectrum. We call such operators exactly solvable. A recent generalization is the notion of a quasi-exactly solvable operator. Here again, we can represent our operator as a matrix relative to a polynomial basis. However, now the matrix is not upper triangular, but does possess a finite-dimensional invariant block.

Richard Wood, "Adjoint Functors", Wednesday, May 4

Abstract: Categories, functors, natural transformations --- almost every mathematician has heard of these terms and has some understanding of them, in so far as they pertain to the category of objects with which he or she works.
Most mathematicians know that you need the first term to define the second, the second to define the third, and that 'natural transformations' were sighted long before the others. For example, it was known that, for every vector space V, there is a 'natural linear map' from V to its double dual V**. It's natural because you can describe it without mentioning, or even knowing, a basis for V. In fact it really doesn't have anything to do with vectors. (It has more to do with the natural function X--->PPX, where P denotes the power-set construction. But I digress.) You further need natural transformations to define 'adjoint functors', which are much more interesting than anything that appears earlier in the sequence. Unfortunately, most textbook appendices that purport to tell you everything you *really* need to know about CT run out of steam well before adjoints or make them look very messy, technical, unappealing and useless. In fact, adjoints provide the means to compute in categories. Almost all interesting constructions in mathematics are adjoint to some other functor, very often a trivial functor --- even when the construction in question is itself highly non-trivial. (If you have an interesting construction that doesn't appear to be adjoint to anything else it's sometimes an indication that the categories in question are not as artfully defined as they should be.) Finally, just as the notion of *isomorphism* becomes truly elegant in a category, so the notion of adjunction becomes truly elegant, and its relation to isomorphism is exposed, when the concept is explored in a 2-category. To make this as simple as possible will be the goal of the talk.
Jin Yue, "The Gauss Bonnet Theorem", Monday, April 11

Abstract: In elementary geometry, for example in the Euclidean 2-space, the sum of the interior angles of a (geodesic) triangle is \pi; we can consider the similar problems for other spaces, e.g., (geodesic) triangles in a 2-sphere or a hyperbolic space of dimension 2. What will happen? For a surface in $R^3$, or more generally a Riemannian manifold, which is a natural generalization of surfaces, a central issue is to understand its topological structure. We need to use some invariants of the surface to get some information about its topology. What we know about a surface comes mostly from its first and second fundamental forms, from which one can form various curvatures; the Gauss curvature, which in fact depends only on the first fundamental form, is intrinsic -- a property that is preserved under isometries. The Gauss-Bonnet theorem then establishes a bridge between the Gauss curvature and the topology of the surface, and is one of the deepest and most beautiful results in differential geometry. We will state explicitly the 2-dimensional Gauss-Bonnet theorem. I will first give the definition of the Gaussian curvature in a simple way, then some interesting applications, e.g., the sum of the interior angles of a geodesic triangle in a 2-dim space form; the special case of the theorem for closed surfaces; the Hadamard theorem stating that any two closed geodesics in an orientable closed surface with positive Gauss curvature must intersect (this is a generalization of the fact that any two great circles in a sphere must intersect); etc. For these applications, I will give the proofs if time permits. I will then move to something about its generalizations -- Chern's theorem. Chern's theorem is one of S. S. Chern's most important works, which is part of the work for which he received the Wolf prize -- "For his outstanding contributions to global differential geometry which influence all mathematics".
After Chern's theorem, the Atiyah-Singer index theorem, which contains Chern's theorem as a special case, is another great theorem in mathematics. Beyond its important applications, the idea contained in the Gauss-Bonnet theorem has had great influence on global Riemannian geometry. Today curvature and topology is a central issue in Riemannian geometry, and Riemannian geometry is closely related to topology, analysis, mathematical physics, algebraic geometry, etc. We need to thank the Gauss-Bonnet theorem. Maybe without it, classical differential geometry couldn't have turned its attention to global problems so early, global Riemannian geometry couldn't have developed so fast, and mathematics would not look as nice as it does now. Yet the idea behind the Gauss-Bonnet theorem is surprisingly simple.

Jeffery Praught, "Hormonal effects on glucose regulation", Friday, April 1

Abstract: In this talk we will discuss a mathematical model of diabetes. A dynamical-systems model of plasma glucose concentration, and its regulation by insulin and glucagon, is described, as pertains to type 1 and 2 diabetes. The hyperglycemic case is seen to be dependent only on insulin concentration, while the hypoglycemic case requires consideration of both insulin and glucagon concentration. The role of healthy alpha-cells in maintaining proper levels of glucose and the hormones is also highlighted.

Sigbjorn Hervik, "Symmetries and Lie groups", Monday, March 21

Abstract: "I am certain, absolutely certain that...these theories will be recognized as fundamental at some point in the future." Sophus Lie said these words more than one hundred years ago. Today the notions of "Lie groups" and "Lie algebras" are in the vocabulary of most mathematicians and theoretical physicists; Lie's theories have become indispensable tools for understanding the physical laws of Nature.
In this seminar we will provide some examples of theories where Lie groups play an important role and we will give an introduction to the concepts of Lie groups and Lie algebras.

Steven Noble, "p-adic Tools Involved in the ABC-Conjecture", Friday, March 11

Abstract: The intention of this talk is to introduce the uninitiated to p-adic numbers, the ABC-conjecture, and how the two can work together. The p-adic norm is an alternate way to measure the distance between rational numbers by measuring divisibility. Just as the Reals are the completion of the Rationals under the standard norm, the p-adic numbers are the completion under the p-adic norm. As an alternate completion the p-adic numbers are interesting in their own right. They also allow certain sums to converge that do not converge in the usual sense, which can be useful in results in number theory. The ABC conjecture states that for any epsilon>0 there exists a constant C such that for any pair of relatively prime numbers a, b with a+b=c, we have c <= C rad(abc)^(1+epsilon), where rad(n) is the product of the primes that divide n. This is interesting because there is a range of modern number theory problems that follow easily from this conjecture. This includes statements about p-adic numbers which in turn make statements about the natural numbers. In this talk I will discuss one such set of statements.

David Iron, "Stability and Dynamics of Multi-spike Solution", Friday, March 4

Abstract: The study of pattern formation in reaction-diffusion equations dates back to the work of Turing in the early 50's. He proposed that the formation of chemical patterns could be one of the mechanisms responsible for the generation of localized structures during embryonic development. In the 1970's Gierer and Meinhardt performed numerical simulations on a reaction-diffusion system designed to explain some of the experimental results from the study of hydra development. I will start with a brief history of the topic.
I will then go on to construct pattern solutions in the Gierer-Meinhardt equations, and to examine the stability and the dynamic interaction of these patterns.

S. Swaminathan, "Hilbert's Problems (continued)", Monday, February 28

Abstract: Although the title says 'continued', the talk will be independent of the first one. A recapitulation of the Introduction to the 1900 Paris Congress and a quick review of the first ten problems will be given in the first part of the talk. Then the talk will focus on Problems 11 to 23.

Josh MacArthur, "Solving Linear Systems of First Order PDEs", Friday, February 18

Abstract: As is well known, the method of characteristics, which traces back to Lagrange, is a standard and powerful technique for solving systems of linear homogeneous PDEs. As a result, it is a preferred choice for dealing with a wide range of PDEs arising in various areas of Mathematics and Physics. However, as is the case with many mathematical techniques, the method of characteristics has its limitations. Recently, I have encountered these limitations first hand in my research and learned that one can employ a technique which can effectively alleviate the difficulties arising from them. I will discuss this technique and present illustrative examples.

Joey Latta, "Phantom Cosmology", Wednesday, February 16

Abstract: Recent astronomical data suggests that the expansion of the universe is actually accelerating. There have been many attempts to explain such a phenomenon. One of the most curious explanations is that of phantom energy, i.e. a type of energy with negative pressure. The purpose of this talk is to examine the standard cosmological model with such an energy, and determine whether or not phantom energy "makes sense", in that it preserves the successes of the standard model, and explains the observed Perlmutter-Riess acceleration.

S. Swaminathan, "Hilbert's Problems (to be continued)", Friday, February 11

Abstract: In the Paris International Congress of Mathematicians, 1900, David Hilbert delivered a talk on 'Mathematical Problems' suggesting 23 unsolved problems which influenced the development of mathematics in the 20th century. It is proposed to present the story of the problems and the solutions achieved.

Richard Hoshino, "Roots of Independence Polynomials", Wednesday, February 2

Abstract: In a graph G, a set of vertices S is independent if no two vertices in S are adjacent. For any graph G, we define the independence polynomial as I(G,x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + ..., where a_k is the number of independent sets in G of cardinality k. For example, if G is the 6-cycle C_6 (with vertices 1,2,3,4,5,6 in that order), then I(C_6, x) = 1 + 6x + 9x^2 + 2x^3, since there is 1 independent set of size 0 (the empty set), 6 independent sets of size 1 (each of the six vertices), 9 independent sets of size 2 (the sets {13}, {14}, {15}, {24}, {25}, {26}, {35}, {36}, {46}), and 2 independent sets of size 3 (the sets {135} and {246}). In this talk, we will investigate the roots of I(C_n, x) with the hope of determining some interesting properties. Using Maple, it appears that the root of largest magnitude is approximately -n^2/10 if n is even, and -n^2/40 if n is odd. At first glance, there is no apparent reason why this should be true. The analysis is especially difficult because there is no obvious way to determine a formula for the roots of I(C_n, x). But if we examine the Chebyshev polynomial T_n(x), we can make a beautiful connection between T_n(x) and I(C_n, x), and we will develop a method to compute the roots of I(C_n, x) explicitly. As a corollary, we will show that the root of largest magnitude is approximately -n^2/10 if n is even, and -n^2/40 if n is odd. (Hint: Pi^2 is very close to 10).
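The counts quoted for I(C_6, x) in the abstract above are easy to verify by brute force; a quick Python sketch (the function name is mine, not from the talk):

```python
from itertools import combinations

def independence_poly_coeffs(n):
    """Coefficients a_0, a_1, ... of I(C_n, x): a_k counts the
    independent sets of size k in the n-cycle, by brute force."""
    edges = {frozenset((i, (i + 1) % n)) for i in range(n)}
    coeffs = [0] * (n + 1)
    for k in range(n + 1):
        for s in combinations(range(n), k):
            # an independent set contains no edge of the cycle
            if all(frozenset(p) not in edges for p in combinations(s, 2)):
                coeffs[k] += 1
    while len(coeffs) > 1 and coeffs[-1] == 0:  # drop impossible sizes
        coeffs.pop()
    return coeffs

print(independence_poly_coeffs(6))  # [1, 6, 9, 2], i.e. 1 + 6x + 9x^2 + 2x^3
```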
We will conclude the talk by mentioning some other results on independence polynomials, which form a chapter of my Ph.D. thesis. No knowledge of graph theory or polynomial theory will be assumed - the only prerequisite is a knowledge of L'Hopital's Rule.

Larissa Lorenz (University of Waterloo and University of Jena), "Short Distance Modifications in Inflation and The Quest for The Right Vacuum", Wednesday, December 1

Abstract: Inflation provides us with a mechanism for generating large scale structure: it traces the origin of galaxies and other structures back to small quantum fluctuations in the inflaton field. Imprints of these fluctuations are today observable as small anisotropies in the CMB radiation. Recent studies have shown that inflation might even be able to predict imprints of as yet unknown small scale physics in the CMB power spectrum. In this context, an interesting model for short distance physics is that of a finite minimum length uncertainty, which expresses an ultraviolet cutoff at some natural scale such as the Planck or string scale. In this talk, I will show how the mode equation for inflaton field modes is modified in the presence of this cutoff, and I will present its exact solutions. The choice of solution corresponds to the choice of the vacuum and I will examine various criteria. These results should enable us to better address the issue of vacuum energy production in the expanding Universe.

Giovanni Ratelli (University of Turin, Italy), "Integration by separation of variables of the Hamilton-Jacobi equation: The first 100 years of the Levi-Civita criterion", Wednesday, November 17

Abstract: In 1904 Tullio Levi-Civita derived necessary and sufficient conditions for integrability of Hamiltonian systems by the method of separation of variables. I will discuss the developments of this result during the last one hundred years.

Jonathan M. Borwein, "Maximum Entropy Methods for Inverse Problems", Wednesday, September 29

Abstract: I shall discuss in "tutorial mode" the formalization of inverse problems such as signal recovery and option pricing as (convex and non-convex) optimization problems over the infinite dimensional space of signals.

Maintained by: Andrew Hoefel and Rob Noble
Chase Building | Dalhousie University | Halifax, Nova Scotia, Canada B3H 3J5
# Use of Price Indexes

Producer and International Trade Price Indexes: Concepts, Sources and Methods. Reference period: 2022. Released 29/04/2022.

This section of the publication provides users with a detailed breakdown of the key users and uses of the Producer and International Trade Price Indexes.

### National Accounts and Balance of Payments – Deflation Principle

The Producer and International Trade Price Indexes are used to deflate the values of a number of components in the Australian National Accounts, including industry inputs and outputs, sales, capital expenditure and inventory data, to produce chain volume measures. The deflation process is integral to the compilation of Gross Domestic Product and its components. In addition, International Trade Price Indexes are used in the compilation of Balance of Payments Chain Volume Measures.

Price deflation is achieved by dividing the current price value for a period (quarter or year) by a measure of the price component (usually in the form of a price index) for the same period. This technique re-values the current price value in the prices of a base period (in the Australian volume measures this is generally the previous year)¹. Revaluation of the current period values using earlier period prices is defined in the following format:

$$\frac{V^t}{\big(\frac{P^t}{P^{t-1}}\big)}=P^{t-1}Q^t$$

$$\Delta Q=\frac{P^{t-1}Q^t}{P^{t-1}Q^{t-1}}$$

Where $$V$$ refers to value, $$P$$ refers to price, $$Q$$ refers to quantity (or, in National Accounts terminology, volume), and the superscripts $$t$$ and $$t-1$$ refer to the current and previous periods respectively.

More information on the use of price indexes in the production of the Australian National Accounts can be found through the following sources:

### Footnotes

¹ The result is, in concept, equivalent to quantity revaluation (i.e.
directly revaluing individual products by multiplying their quantity produced or sold in each period by their price in a related base period), since it removes changes in the price component of the current price value, leaving a measure that reflects the volume (or quantity) component valued at the base period prices.
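The deflation arithmetic above can be sketched in a few lines of Python (the figures are illustrative only, and the function names are mine, not ABS terminology):

```python
def deflate(value_t, price_t, price_t_minus_1):
    """Revalue V^t = P^t * Q^t at previous-period prices:
    V^t / (P^t / P^{t-1}) = P^{t-1} * Q^t."""
    return value_t / (price_t / price_t_minus_1)

def volume_movement(value_t, value_t_minus_1, price_t, price_t_minus_1):
    """Quantity (volume) index: (P^{t-1} Q^t) / (P^{t-1} Q^{t-1})."""
    return deflate(value_t, price_t, price_t_minus_1) / value_t_minus_1

# Illustrative figures only: the value rises 8% while prices rise 5%,
# so the volume movement is 1.08 / 1.05, about +2.9%.
print(volume_movement(108.0, 100.0, 105.0, 100.0))  # ~1.0286
```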
# Prove periodicity of exp/sin/cos from Taylor series?

1. Nov 24, 2008

### Gerenuk

How is it possible to see that $\exp(i\phi)$ is periodic with period $2\pi$ from the Taylor series? So basically it boils down to whether it is easy to see that $$\sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}(2\pi)^{2n}=1$$ ? Or any other suggestions?

2. Nov 24, 2008

### lurflurf

How have you defined pi? If you have defined it as half the period of exp(ix) it should be easy.

3. Nov 24, 2008

### Gerenuk

Good point. Actually I like the idea of defining pi this way. So I only have to prove that such a number exists? Could I reverse the series to get a series for pi defined this way?

4. Nov 25, 2008

### HallsofIvy

Staff Emeritus

From the Taylor series for sine and cosine, it is easy to show, differentiating term by term, that (sin x)'' = -sin(x) and (cos x)'' = -cos(x). That is, sin(x) satisfies the differential equation y''= -y with the initial conditions y(0)= 0, y'(0)= 1, and cos(x) satisfies the differential equation y''= -y with the initial conditions y(0)= 1, y'(0)= 0. y''= -y is a linear second order differential equation and it is easy to show that any solution, y(x), to that differential equation is of the form y(x)= A cos(x)+ B sin(x) where A= y(0) and B= y'(0). From that, and the theorem "The set of all eigenvalues of the Sturm-Liouville problem $$\frac{d}{dt}\left(p(t)\frac{dy}{dt}\right)+ (\lambda+ q(t))y= 0$$ with boundary conditions y(0)= 0, y(1)= 0, forms an increasing unbounded sequence", with p(t)= 1, q(t)= 0, one can prove the periodicity of sine and cosine. And then, of course, the fact that $e^{ix}= \cos(x)+ i \sin(x)$ gives the periodicity of $e^{ix}$.

5. Nov 25, 2008

### Gerenuk

Can you help me with the reasoning? I know that for y=exp(ix), y''+y=0. I know that in y''+a*y=0 the eigenvalues a form an unbounded sequence. How do I know that the boundary condition y(1)=0 is satisfied? What to conclude?

6. Nov 25, 2008

### HallsofIvy

Staff Emeritus

I have been hoping for someone to ask!
Theorem: The set of all eigenvalues of $d^2y/dx^2+ \lambda y= 0$, with boundary conditions y(0)= 0, y(1)= 0, forms an increasing, unbounded sequence. (Notice that I am starting from that theorem: I am not showing that $e^{ix}$ satisfies it.)

The first thing that tells us is that the problem has eigenvalues. What are they? If we were to try $\lambda= 0$, the problem becomes y''= 0 and, integrating twice, y= Ax+ B, so y(0)= B= 0 and y(1)= A+ B= A= 0. Both A and B are 0: there is no non-trivial function satisfying the differential equation and the boundary conditions, so 0 is not an eigenvalue.

Let's try $\lambda< 0$. To make that explicit, write $\lambda= -\alpha^2$ where $\alpha$ can be any positive number. Now the differential equation is $y''- \alpha^2y= 0$ and we know, from elementary differential equations, that $y(t)= Ae^{\alpha t}+ Be^{-\alpha t}$. $y(0)= A+ B= 0$ and $y(1)= Ae^{\alpha}+ Be^{-\alpha}= 0$. From the first equation, B= -A. Putting that into the second equation and factoring out A, $A(e^{\alpha}- e^{-\alpha})= 0$. Since $e^x$ is a one-to-one function, those two exponentials cannot be the same and $e^{\alpha}- e^{-\alpha}$ cannot be 0: A= 0 and B= -A= 0. Since A and B are both 0, y is identically 0 and $-\alpha^2$ is not an eigenvalue. No negative number is an eigenvalue.

Since we know this problem has eigenvalues and they are neither 0 nor negative, the eigenvalues must be positive. Further, the set of eigenvalues forms an increasing sequence. Since every sequence has a first member, and this sequence is increasing, the first member in the sequence is the smallest positive eigenvalue. Let $\lambda_1$ be the smallest eigenvalue for this problem. Change to a new variable: $x= \sqrt{\lambda_1}t$, so that $d^2y/dt^2= \lambda_1\, d^2y/dx^2$ and the equation becomes $$d^2y/dt^2+ \lambda_1 y= \lambda_1\, d^2y/dx^2+ \lambda_1 y= 0$$ so $$d^2y/dx^2+ y= 0$$ Of course, we also have to change the boundary values.
When t= 0, x= $\sqrt{\lambda_1}(0)= 0$ and when t= 1, x= $\sqrt{\lambda_1}(1)= \sqrt{\lambda_1}$. The general solution to y''+ y= 0 is, as we have already seen, y(x)= A cos(x)+ B sin(x). y(0)= A= 0 and then $y(\sqrt{\lambda_1})= B \sin(\sqrt{\lambda_1})= 0$. That looks a lot like what happened above with the exponentials, but there is an important difference: we know that this "smallest eigenvalue" exists, so there must be a non-trivial solution: y(x) is not identically 0, which means the two constants, A and B, cannot both be 0. Since A obviously is 0 and $B\sin(\sqrt{\lambda_1})= 0$, we must have $\sin(\sqrt{\lambda_1})= 0$.

Okay, that proves that sine is NOT one-to-one, $\sin(0)= \sin(\sqrt{\lambda_1})$, but what about all of the numbers between 0 and $\sqrt{\lambda_1}$? Is this enough to prove that sine is periodic? No, it is not: we need to look at cosine also. We don't have to repeat all of this for the cosine: $\sin^2(\sqrt{\lambda_1})+ \cos^2(\sqrt{\lambda_1})= 1$ so, since $\sin(\sqrt{\lambda_1})= 0$, we have $\cos^2(\sqrt{\lambda_1})= 1$ and then $\cos(\sqrt{\lambda_1})= -1$. Yes, that's right, -1. When I first did this calculation I wrote, automatically, "1" and got myself into a terrible mess. Since $\cos^2(\sqrt{\lambda_1})= 1$, $\cos(\sqrt{\lambda_1})$ must be either -1 or 1. To see that it can't be 1, look at the half angle formula: $\sin(\sqrt{\lambda_1}/2)= \sqrt{(1/2)(1- \cos(\sqrt{\lambda_1}))}$. IF $\cos(\sqrt{\lambda_1})= 1$ then $\sin(\sqrt{\lambda_1}/2)= 0$. But that would mean that $y(t)= \sin((\sqrt{\lambda_1}/2)t)$ is a non-trivial function satisfying $y''+ (\lambda_1/4)y= 0$, y(0)= 0 and y(1)= 0, meaning that $\lambda_1/4$ is an eigenvalue, contradicting the fact that $\lambda_1$ is the smallest eigenvalue. $\cos(\sqrt{\lambda_1})$ cannot be 1 so it must be -1.
Now use the double angle formulas: $\cos(2\sqrt{\lambda_1})= \cos^2(\sqrt{\lambda_1})- \sin^2(\sqrt{\lambda_1})= (-1)^2- 0^2= 1$ and $\sin(2\sqrt{\lambda_1})= 2\sin(\sqrt{\lambda_1})\cos(\sqrt{\lambda_1})= 2(0)(-1)= 0$. That is, sin(x) is 0 at 0 and at $2\sqrt{\lambda_1}$, and cos(x) is 1 at 0 and $2\sqrt{\lambda_1}$. Is that enough to prove that sine and cosine are periodic? Yes, it is! Use the sum formulas: for any x, $\sin(x+ 2\sqrt{\lambda_1})= \sin(x)\cos(2\sqrt{\lambda_1})+ \cos(x)\sin(2\sqrt{\lambda_1})= \sin(x)(1)+ \cos(x)(0)= \sin(x)$ and $\cos(x+ 2\sqrt{\lambda_1})= \cos(x)\cos(2\sqrt{\lambda_1})- \sin(x)\sin(2\sqrt{\lambda_1})= \cos(x)(1)- \sin(x)(0)= \cos(x)$.

But what is $\sqrt{\lambda_1}$? Since $\sin^2(t)+ \cos^2(t)= 1$ and each function is periodic with period $2\sqrt{\lambda_1}$, we can use x= R cos(t), y= R sin(t) as parametric equations for a circle of radius R. The circumference of that circle is given by the arclength integral: $$\int_0^{2\sqrt{\lambda_1}} \sqrt{(dx/dt)^2+ (dy/dt)^2}\,dt= \int_0^{2\sqrt{\lambda_1}} R\, dt= 2\sqrt{\lambda_1} R.$$ Of course, that circumference is "$\pi$ times the diameter", or "$2\pi R$", so $\sqrt{\lambda_1}= \pi$ and sine and cosine are periodic with period $2\pi$.

Last edited: Dec 2, 2008

7. Nov 26, 2008

### Gerenuk

That looks really complicated. I think it's equivalent to arguing that the solution to the differential equation is $y(x)=A\sin(\sqrt{\lambda} x)$ and by the boundary conditions one can see that the function has to repeat at least one value. Is Sturm-Liouville really this way? With my first search attempt I couldn't find anything about these imposed boundary conditions (which basically define the periodicity). I would suspect that to prove this boundary condition version of SL one would actually assume the periodicity of exp(ix).

8. Nov 26, 2008

### HallsofIvy

Staff Emeritus

Since you asked about proving periodicity, it would be really silly to assume periodicity! The "Sturm-Liouville" theorem I quoted has nothing to do with periodicity.
Certainly, saying that y(0)= y(1)= 0 does not require periodicity. In particular, y= x^2- x satisfies y(0)= y(1)= 0 without being periodic.

9. Nov 26, 2008

### Gerenuk

Could you point me to an internet link with this theorem, where they have the same boundary conditions? I heard about SL before, but not about the boundary condition part. That would be an important ingredient. What I said is that the SL theorem with these boundary conditions should be checked to see whether it already assumes periodicity of exp(ix). Otherwise you can't use it as a proof. Once you know that sin(x) is the solution to the above equation, then saying y(0)=y(1)=0 is equivalent to saying that the function is periodic (which in fact you did in the proof).

10. Nov 26, 2008

### HallsofIvy

Staff Emeritus

11. Nov 26, 2008

### Gerenuk

I cannot remember all of SL. I'll check that again. It's weird that you didn't understand what I wrote in the previous reply. One needs to check the derivation of SL to make sure it doesn't already assume periodicity of exp(ix) to deduce results. But that's what I'll do next. Maybe then the issue will be resolved. I mean you wouldn't need fancy SL to prove it. Just say I know the sum rule for sin(x). Let's assume y(0)=y(1)=0 and y(x)=sin(ax). But hey, that assumption(!) already proves that sin(x) is periodic.

12. Nov 26, 2008

### HallsofIvy

Staff Emeritus

NO! Knowing that a function is 0 at two different points does NOT prove it is periodic. Use the example I gave before. Would you say: "Let's assume y(0)=y(1)=0 and y(x)= x^2. But hey, that assumption(!) already proves that x^2 is periodic"? In any case, I am not assuming that y(0)= y(1)= 0 for y(x) a trig function. I used the theorem to prove that sine of a certain number is 0, which, as I said, does not, by itself, prove that sine is periodic. If this proof is too complicated for you, I guess you will just have to go with one of the other, simpler, proofs given in response to your question.

13.
Nov 26, 2008 ### Gerenuk I appreciate your suggestion with SL, but I advise you to read the posts fully before objecting. I wrote that knowing y(0)=y(a) and knowing the sum rules proves periodicity. In fact you used it yourself in your proof. 14. Dec 2, 2008 ### HallsofIvy Staff Emeritus I have corrected a LaTeX expression that may have caused the difficulty. However, the point is that I prove that "we must have $\sin(\sqrt{\lambda_1})= 0$", I do not assume it. That is the whole point of the proof. 15. Dec 2, 2008 ### Gerenuk Yes, I know. I haven't had time to check yet: But are you sure that the proof of the SL theorem you use doesn't make use of the periodicity of exp(ix)? 16. Dec 2, 2008 ### maze This is actually kind of related to another recent thread. If you show that exp( i x ) satisfies the DE g' = ig, g(0)=1 (easy to do term by term), then letting g(x)=u(x)+iv(x) for some real u and v, you can compute $$\frac{d}{dx} |g|^2= \frac{d}{dx}(u^2+v^2)=2uu'+2vv'=-2uv+2vu=0.$$ Since the derivative of $|g|^2$ is zero and g(0)=1, we have |exp( i x )| = |g(x)| = constant = 1. This shows g lies on the unit circle in the complex plane. We also know that |g'| = |g|, so $$\frac{d}{dx}|g'(x)|^2 = \frac{d}{dx}|g(x)|^2 = 0.$$ Since g'(0) = ig(0) = i, we have |g'(x)| = constant = 1. Then g is a unit speed parameterization of the unit circle, so the period of g is the arc-length of a circle: 2pi. 17. Dec 3, 2008 ### Gerenuk That seems an OK argument to me. I'd even argue $$|\exp(\mathrm{i}x)|^2=\exp(\mathrm{i}x)\exp(\mathrm{i}x)^*=\exp(\mathrm{i}x)\exp(-\mathrm{i}x)=1,$$ assuming some basic rules. That in the end shows the existence of pi. Is it possible to derive some sort of series for this constant pi defined this way? (see my initial question) 18. Dec 3, 2008 ### lurflurf ^ Must it be a series? You have $\pi=\lim_{n\to\infty} -\mathrm{i}\,n\left[-1+(-1)^{1/n}\right]$ and the series for arctan, arcsin, etc.
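maze's unit-circle argument is easy to sanity-check numerically. The snippet below (my own illustration, not from the thread) verifies on a grid of points that $g(x)=e^{ix}$ stays on the unit circle and repeats with period $2\pi$ — a floating-point spot check only, of course, not a replacement for the proof:

```python
import cmath
import math

# g(x) = exp(i*x): check |g(x)| == 1 (unit circle) and g(x + 2*pi) == g(x)
# on a grid of sample points.
for k in range(1000):
    x = 0.013 * k
    g = cmath.exp(1j * x)
    assert abs(abs(g) - 1.0) < 1e-12                            # on the unit circle
    assert abs(cmath.exp(1j * (x + 2 * math.pi)) - g) < 1e-12   # period 2*pi
print("all checks passed")
```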
# Example of a PRP that is not a strong PRP The exact definition of security for a pseudorandom permutation is straightforward - for some encryption scheme $E\,\colon\,\mathcal{K}\times\mathcal{D}\rightarrow\mathcal{D}$, it must be the case that no efficient adversary can distinguish $E_k(\cdot)$ from $\Pi(\cdot)$ (a random permutation on $\mathcal{D}$) except with negligible probability. A "strong" PRP is defined the same way, except $E_k(\cdot)$ and decryption $D_k(\cdot)$ must be jointly indistinguishable from $\Pi(\cdot)$ and $\Pi^{-1}(\cdot)$. I've thought about this but I can't quite come up with an example of an $E$ whose encryption is indistinguishable from a random permutation but whose decryption doesn't have the same property. Does anyone have a nice example? EDIT: I saw DW's example in the answer of this question. It is a very clever construction but quite 'unnatural' as well. Are there more straightforward examples? • This is the first time I encounter this definition, it's probably equivalent to the usual one but it's very weird. – fkraiem Feb 21 '16 at 0:32 • I saw DW's example but it is a little bit unsatisfying, in a sense - I was hoping for a more 'natural' example that says something about the difference between the two definitions. – pg1989 Feb 21 '16 at 0:33 • @fkraiem I tried to write it out formally but the LaTeX was being weird. – pg1989 Feb 21 '16 at 0:36 • It seems like your definition of (strong) PRP is not equivalent to the usual one, since for that, they need to be jointly indistinguishable from [a random permutation and its inverse], rather than just each indistinguishable from a random permutation. ​ ​ – user991 Feb 21 '16 at 1:05 • Yeah, I'll edit it to make that clear. – pg1989 Feb 21 '16 at 1:07 A three-round Feistel network is a good example of a realistic construction that is a secure "weak" PRP, but not a "strong" PRP.
A Feistel network uses the permutation $P_f(L, R) = (R, L\oplus f(R))$, where $f$ is an element of a pseudorandom function family. This PRP will be keyed with three keys $k_1, k_2, k_3$, which will be used to key a PRF $F$ differently in each round. We define $E_{k_1,k_2,k_3}$ to be a three-round Feistel network: $E_{k_1,k_2,k_3}(L \| R) = \operatorname{Concat}(P_{F_{k_3}}(P_{F_{k_2}}(P_{F_{k_1}}(L, R))))$ Assuming $F$ is a PRF, $E$ will meet the definition of a weak PRP. I believe this proof is originally attributed to Luby and Rackoff (for details of the proof, see here, starting on page 11). Similarly, the inverse $D_{k_1,k_2,k_3} = E^{-1}_{k_1,k_2,k_3}$ is also a weak PRP. Interestingly, though, $E$ is not a strong PRP. When given simultaneous access to both a "forward" oracle and a "backward" oracle, an adversary can distinguish between $(E_{k_1,k_2,k_3}(\cdot), D_{k_1,k_2,k_3}(\cdot))$ and $(\Pi(\cdot), \Pi^{-1}(\cdot))$, where $\Pi$ is a randomly selected permutation on the same domain. Here is an adversary that distinguishes the two with high probability: • Query the decryption oracle with two all-zero half-blocks: $(a\|b) \leftarrow D(0\|0)$ • Query the encryption oracle: $(c\|d) \leftarrow E(0\|a)$ • Query the decryption oracle again: $(e\|f) \leftarrow D((b\oplus d)\|c)$ • If $e=c\oplus a$, then return $1$, else return $0$. Here's why this works: • By expansion, we see that $D_{k_1,k_2,k_3}(L\|R) = (x\|y)$, where: $x=R \oplus F_{k_2}(L \oplus F_{k_3}(R))$ $y=L \oplus F_{k_3}(R) \oplus F_{k_1}(R \oplus F_{k_2}(L\oplus F_{k_3}(R)))$ • It follows that the first oracle query will result in: $a=F_{k_2}(F_{k_3}(0))$ $b=F_{k_3}(0) \oplus F_{k_1}(a)$ • By expansion, we see that $E_{k_1,k_2,k_3}(L\|R) = (x\|y)$, where: $x=R \oplus F_{k_2}(L \oplus F_{k_1}(R))$ $y=L \oplus F_{k_1}(R) \oplus F_{k_3}(R \oplus F_{k_2}(L\oplus F_{k_1}(R)))$ • It follows that the second oracle query will result in: $c=a \oplus F_{k_2}(F_{k_1}(a))$ and $d=F_{k_1}(a) \oplus F_{k_3}(c)$.
• Note that $b$ and $d$ both contain the term $F_{k_1}(a)$. When we compute $b\oplus d$, the terms cancel: $b\oplus d=F_{k_3}(0) \oplus F_{k_3}(c)$ • Finally, in the third oracle query, the specifically crafted $L$ and $R$ cause the following simplification: $e=c \oplus F_{k_2}((b\oplus d) \oplus F_{k_3}(c))\\=c \oplus F_{k_2}(F_{k_3}(0))\\=c \oplus a$ • The adversary finds that $e=c\oplus a$ as required, which would only be expected with low probability for a truly random permutation. The basic idea is to set things up so that $F_{k_2}$ receives the same input in two different queries. This causes the left oracle output to be masked with the same value, which can be detected by the adversary. Crucially, this attack would not work without the ability to query the permutation in both directions. • Very nice (and natural) example. Thanks for writing this up. – pg1989 Feb 21 '16 at 23:03 For convenience, let's assume that $\mathcal{K} = \mathcal{D}$ so that the key $k \in \mathcal{D}$. Define $E$ to be some strong PRP, and let $D$ be its inverse. Now, define a PRP $E' : \mathcal{D} \times \mathcal{D} \to \mathcal{D}$ such that $E'_k(x) = E_k(x)$ for all values of $k$ and $x$, but with the following adjustments: • Define $E'_k(k) = 0$ for all values of $k$. The choice of $0$ is arbitrary; any fixed value known to the adversary will do. • Define $E'_k(D_k(0)) = E_k(k)$ so that $E'_k$ remains a permutation. An adversary will have great difficulty distinguishing $E'(\cdot)$ from a random permutation $\Pi(\cdot)$, since it does not know the key $k$. However, when given access to a decryption oracle $D'_k(\cdot)$, the adversary can query $D'_k(0)$, discover the key $k$, and perform an additional query to determine whether or not the permutation is random.
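The Feistel distinguisher above is small enough to check by machine. Below is a self-contained Python sketch — my own toy instantiation, not from the answer: a truncated-SHA-256 stand-in for the PRF, 32-bit halves, and decryption implemented as the literal inverse of encryption. (The answer's $D$ uses a half-swapped convention, so the query halves below are swapped relative to the bullet list; the algebra is unchanged.)

```python
import hashlib

BITS = 32
MASK = (1 << BITS) - 1

def prf(key: bytes, x: int) -> int:
    """Stand-in PRF: truncated SHA-256 of key || x (illustration only)."""
    digest = hashlib.sha256(key + x.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big") & MASK

def encrypt(keys, L, R):
    """Three-round Feistel: each round maps (L, R) -> (R, L ^ F_k(R))."""
    for k in keys:
        L, R = R, L ^ prf(k, R)
    return L, R

def decrypt(keys, L, R):
    """Literal inverse of encrypt: undo the rounds in reverse order."""
    for k in reversed(keys):
        L, R = R ^ prf(k, L), L
    return L, R

def distinguish(enc, dec):
    """Two-sided distinguisher: returns True whenever (enc, dec) is the
    three-round Feistel pair, since then e = c ^ a always holds."""
    b, a = dec(0, 0)        # a = F2(F3(0)), b = F3(0) ^ F1(a)
    c, d = enc(0, a)        # c = a ^ F2(F1(a)), d = F1(a) ^ F3(c)
    f, e = dec(c, b ^ d)    # the crafted input makes F2 see F3(0) again
    return e == (c ^ a)

keys = [b"k1", b"k2", b"k3"]
enc = lambda L, R: encrypt(keys, L, R)
dec = lambda L, R: decrypt(keys, L, R)
assert dec(*enc(12345, 67890)) == (12345, 67890)  # dec really inverts enc
print(distinguish(enc, dec))  # True
```

For a truly random permutation and its inverse, the final check would succeed only with probability about $2^{-32}$ at this block size.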
# Can I run Orange widgets from normal Python scripts? I'd like to do normal machine learning with Python and Sklearn, but occasionally use the nice graphical widgets of Orange for presentation. Is this possible? I don't think Orange was ever intended to be used that way, but if you can convert your data into Orange.data.Table, you might be able to instantiate the widgets you need with it. See the if __name__ == '__main__' block at the bottom of most widgets for some examples.
# Do the rows of the design matrix refer to the observations or predictors? I attempt to understand the formulation of dictionary learning for this paper: Both papers used the exact same formulation in two different domains. Part 1: Clarification on math notations Based on my understanding, in common machine learning we formulate our matrices, built from vectors, with rows as observations and columns as predictors. Given a matrix $A$:

| | $p_1$ | $p_2$ | $p_3$ | $p_4$ | $p_5$ | label |
|---|---|---|---|---|---|---|
| $o_1$ | 1 | 2 | 3 | 4 | 1 | 1 |
| $o_2$ | 2 | 3 | 4 | 5 | 2 | 1 |
| $o_3$ | 3 | 4 | 5 | 6 | 2 | 0 |
| $o_4$ | 4 | 5 | 6 | 7 | 3 | 0 |

So using math notation and excluding the label column, I can define this matrix, $A \in \mathbb{R}^{4\times 5}$, with observations $o_1,\dots,o_4$ stacked as rows: $$A = \begin{bmatrix} 1 & 2 & 3 & 4 & 1 \\ 2 & 3 & 4 & 5 & 2 \\ 3 & 4 & 5 & 6 & 2 \\ 4 & 5 & 6 & 7 & 3 \end{bmatrix},$$ and in numpy: import numpy as np A = np.array([[1, 2, 3, 4, 1], [2, 3, 4, 5, 2], [3, 4, 5, 6, 2], [4, 5, 6, 7, 3]]) A.shape # (4, 5) Am I right?
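Yes — under that convention $A$ has one row per observation and one column per predictor, which a quick NumPy check confirms (my own illustration, assuming NumPy is available):

```python
import numpy as np

# Design-matrix convention: rows are observations o_1..o_4,
# columns are predictors p_1..p_5 (the label column is excluded).
A = np.array([[1, 2, 3, 4, 1],
              [2, 3, 4, 5, 2],
              [3, 4, 5, 6, 2],
              [4, 5, 6, 7, 3]])

assert A.shape == (4, 5)                  # n_observations x n_predictors
assert A[1].tolist() == [2, 3, 4, 5, 2]   # A[i] is observation o_{i+1}
assert A[:, 0].tolist() == [1, 2, 3, 4]   # A[:, j] is predictor p_{j+1}
```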
## 3.6 Derivatives of Logarithmic Functions Today, we will find the derivative of y = ln x using the fact that it is the inverse of the function y = e^x. There are a couple of different ways to determine this, and we will make use of the properties of logarithms to differentiate more complicated logarithmic functions as well. Assignment: p. 170 #19-39 Attachments: 3.6.notes.pdf (419 kB)
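As a preview, the inverse-function computation itself takes only a few lines (the same steps we will carry out in class):

```latex
y = \ln x \iff e^{y} = x
\qquad\Longrightarrow\qquad
e^{y}\,\frac{dy}{dx} = 1
\qquad\Longrightarrow\qquad
\frac{dy}{dx} = \frac{1}{e^{y}} = \frac{1}{x}.
```

Differentiating $e^{y}=x$ implicitly with respect to $x$ uses the chain rule on the left side; dividing by $e^{y}$ and substituting back $e^{y}=x$ gives the result.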
data_frame() is a nice way to create data frames. It encapsulates best practices for data frames: • It never changes an input’s type (i.e., no more stringsAsFactors = FALSE!). data.frame(x = letters) %>% sapply(class) #> x #> "factor" data_frame(x = letters) %>% sapply(class) #> x #> "character" This makes it easier to use with list-columns: data_frame(x = 1:3, y = list(1:5, 1:10, 1:20)) #> Source: local data frame [3 x 2] #> #> x y #> (int) (chr) #> 1 1 <int[5]> #> 2 2 <int[10]> #> 3 3 <int[20]> List-columns are most commonly created by do(), but they can be useful to create by hand. • It never adjusts the names of variables: data.frame(`crazy name` = 1) %>% names() #> [1] "crazy.name" data_frame(`crazy name` = 1) %>% names() #> [1] "crazy name" • It evaluates its arguments lazily and sequentially: data_frame(x = 1:5, y = x ^ 2) #> Source: local data frame [5 x 2] #> #> x y #> (int) (dbl) #> 1 1 1 #> 2 2 4 #> 3 3 9 #> 4 4 16 #> .. ... ... • It adds the tbl_df() class to the output so that if you accidentally print a large data frame you only get the first few rows. data_frame(x = 1:5) %>% class() #> [1] "tbl_df" "tbl" "data.frame" • It changes the behaviour of [ to always return the same type of object: subsetting using [ always returns a tbl_df() object; subsetting using [[ always returns a column. You should be aware of one case where subsetting a tbl_df() object will produce a different result than a data.frame() object: df <- data.frame(a = 1:2, b = 1:2) str(df[, "a"]) #> int [1:2] 1 2 tbldf <- tbl_df(df) str(tbldf[, "a"]) #> Classes 'tbl_df' and 'data.frame': 2 obs. of 1 variable: #> $a: int 1 2 • It never uses row.names(). The whole point of tidy data is to store variables in a consistent way. So it never stores a variable as a special attribute. • It only recycles vectors of length 1. This is because recycling vectors of greater lengths is a frequent source of bugs.
Coercion To complement data_frame(), dplyr provides as_data_frame() to coerce lists into data frames. It does two things: • It checks that the input list is valid for a data frame, i.e. that each element is named, is a 1d atomic vector or list, and all elements have the same length. • It sets the class and attributes of the list to make it behave like a data frame. This modification does not require a deep copy of the input list, so it’s very fast. This is much simpler than as.data.frame(). It’s hard to explain precisely what as.data.frame() does, but it’s similar to do.call(cbind, lapply(x, data.frame)) - i.e. it coerces each component to a data frame and then cbinds() them all together. Consequently as_data_frame() is much faster than as.data.frame(): l2 <- replicate(26, sample(100), simplify = FALSE) names(l2) <- letters microbenchmark::microbenchmark( as_data_frame(l2), as.data.frame(l2) ) #> Unit: microseconds #> expr min lq mean median uq #> as_data_frame(l2) 102.631 113.7605 135.0108 121.490 142.8445 #> as.data.frame(l2) 1524.588 1619.3670 1901.9522 1739.363 2063.0425 #> max neval cld #> 318.934 100 a #> 3705.862 100 b The speed of as.data.frame() is not usually a bottleneck when used interactively, but can be a problem when combining thousands of messy inputs into one tidy data frame. Memory One of the reasons that dplyr is fast is that it is very careful about when it makes copies. This section describes how this works, and gives you some useful tools for understanding the memory usage of data frames in R. The first tool we’ll use is dplyr::location(). 
It tells us the memory location of three components of a data frame object: • the data frame itself • each column • each attribute location(iris) #> <0x7fc5ad268b40> #> Variables: #> * Sepal.Length: <0x7fc5ad1f8a00> #> * Sepal.Width: <0x7fc5ad258a00> #> * Petal.Length: <0x7fc5ad274000> #> * Petal.Width: <0x7fc5ad273200> #> * Species: <0x7fc5ae8364e0> #> Attributes: #> * names: <0x7fc5ad268ba8> #> * row.names: <0x7fc5ae817410> #> * class: <0x7fc5ad294798> It’s useful to know the memory address, because if the address changes, then you’ll know that R has made a copy. Copies are bad because they take time to create. This isn’t usually a bottleneck if you have a few thousand values, but if you have millions or tens of millions of values it starts to take significant amounts of time. Unnecessary copies are also bad because they take up memory. R tries to avoid making copies where possible. For example, if you just assign iris to another variable, it continues to point to the same location: iris2 <- iris location(iris2) #> <0x7fc5ad268b40> #> Variables: #> * Sepal.Length: <0x7fc5ad1f8a00> #> * Sepal.Width: <0x7fc5ad258a00> #> * Petal.Length: <0x7fc5ad274000> #> * Petal.Width: <0x7fc5ad273200> #> * Species: <0x7fc5ae8364e0> #> Attributes: #> * names: <0x7fc5ad268ba8> #> * row.names: <0x7fc5aafd30c0> #> * class: <0x7fc5ad294798> Rather than having to compare hard-to-read memory locations, we can instead use the dplyr::changes() function to highlight changes between two versions of a data frame. The code below shows us that iris and iris2 are identical: both names point to the same location in memory. changes(iris2, iris) #> <identical> What do you think happens if you modify a single column of iris2?
In R 3.1.0 and above, R knows to modify only that one column and to leave the others pointing to their existing locations: iris2$Sepal.Length <- iris2$Sepal.Length * 2 changes(iris, iris2) #> Changed variables: #> old new #> #> Changed attributes: #> old new #> row.names 0x7fc5aac62ab0 0x7fc5aac96a40 (This was not the case prior to version 3.1.0, where R created a deep copy of the entire data frame.) dplyr is equally smart: iris3 <- mutate(iris, Sepal.Length = Sepal.Length * 2) changes(iris3, iris) #> Changed variables: #> old new #> #> Changed attributes: #> old new #> row.names 0x7fc5aaca8ae0 0x7fc5aaca8d60 It creates only one new column while all the other columns continue to point at their original locations. You might notice that the attributes are still copied. However, this has little impact on performance. Because attributes are usually short vectors, the internal dplyr code needed to copy them is also considerably simpler. dplyr never makes copies unless it has to: • tbl_df() and group_by() don’t copy columns • select() never copies columns, even when you rename them • mutate() never copies columns, except when you modify an existing column • arrange() must always copy all columns because you’re changing the order of every one. This is an expensive operation for big data, but you can generally avoid it using the order argument to window functions • summarise() creates new data, but it’s usually at least an order of magnitude smaller than the original data. In short, dplyr lets you work with data frames with very little memory overhead. data.table takes this idea one step further: it provides functions that modify a data table in place. This avoids the need to make copies of pointers to existing columns and attributes, and speeds up operations when you have many columns.
dplyr doesn’t do this with data frames (although it could) because I think it’s safer to keep data immutable: even if the resulting data frame shares practically all the data of the original data frame, all dplyr data frame methods return a new data frame.
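As an aside, the "assignment is not copying" point that location() illustrates holds in any language with reference semantics. A quick Python analogue (not part of the dplyr vignette, just an illustration of the same idea):

```python
# Assignment binds a new name to the same object; no data is copied.
a = list(range(1000))
b = a
assert id(a) == id(b)        # same address, like location() showing one pointer

c = a[:]                     # an explicit copy gets a new address
assert id(c) != id(a) and c == a

c[0] = -1                    # modifying the copy leaves the original untouched
assert a[0] == 0
```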
I wanted to contribute to the PyFladesk project. The objective is to have a function that receives a Flask app and embeds it in a Qt desktop app. I've already created a Pull Request on the main repository; individual commits and changes can be viewed there. This was the initial version: 83 lines - 3 Classes - 1 Function - Uses PyQt4 import sys,webbrowser from PyQt4.QtGui import QApplication,QMainWindow,QIcon from PyQt4.QtWebKit import QWebView,QWebPage # CONFIG PORT = 5000 ROOT_URL = 'http://localhost:{}'.format(PORT) WIDTH = 300 HEIGHT = 400 ICON = 'appicon.png' # run flask on a separate thread def __init__(self, application): self.application = application def __del__(self): self.wait() def run(self): self.application.run(port=PORT) class MainWindow(QMainWindow): def __init__(self): QMainWindow.__init__(self) self.resize(WIDTH, HEIGHT) self.setWindowTitle(WINDOW_TITLE) self.webView = WebView(self) self.setCentralWidget(self.webView) class WebView(QWebView): def __init__(self, parent=None): super(WebView, self).__init__(parent) def dragEnterEvent(self, e): e.ignore() def dropEvent(self, e): e.ignore() # open links in default browser # stolen from http://stackoverflow.com/a/3188942/1103397 :D webbrowser.open(url.toEncoded().data()) def provide_GUI_for(application): qtapp = QApplication(sys.argv) webapp.start() mainWindow = MainWindow() # set app icon mainWindow.setWindowIcon(QIcon(ICON)) # prevent open urls in QWebView.
mainWindow.show() return qtapp.exec_() if __name__ == '__main__': from routes import app provide_GUI_for(app) And this is the updated version: 50 lines - 1 Function - Uses PyQt 5.10.0 import sys from PyQt5 import QtCore, QtWidgets, QtGui, QtWebEngineWidgets def init_gui(application, port=5000, width=300, height=400, ROOT_URL = 'http://localhost:{}'.format(port) # open links in browser from http://stackoverflow.com/a/3188942/1103397 :D # thanks to https://github.com/marczellm/qhangups/blob/cfed73ee4383caed1568c0183a9906180f01cb00/qhangups/WebEnginePage.py is_not_internal = ROOT_URL not in ready_url if is_clicked and is_not_internal: QtGui.QDesktopServices.openUrl(url) return False return True def run_app(): # Application Level qtapp = QtWidgets.QApplication(sys.argv) webapp.__del__ = webapp.wait webapp.run = run_app webapp.start() # Main Window Level window = QtWidgets.QMainWindow() window.resize(width, height) window.setWindowTitle(window_title) window.setWindowIcon(QtGui.QIcon(icon)) # WebView Level window.webView = QtWebEngineWidgets.QWebEngineView(window) window.setCentralWidget(window.webView) # WebPage Level page = QtWebEngineWidgets.QWebEnginePage() window.webView.setPage(page) window.show() return qtapp.exec_() Some of my concerns are: • Is it a good decision not to use classes at all? • Is the code readable and maintainable? • Are there any suggestions to improve it? This was once reviewed here • Original review of PyFladesk. Feb 22 '18 at 15:49 I’m not too fond of putting it all into a single method as: 1. it seems to have too many responsibilities; 2. building some objects (QThread, QWebEnginePage) is really awkward when methods are reassigned. I really like using default values for parameters instead of global constants, though.
Instead I’d go with an intermediate state where I’d use proper subclassing instead of method reassignment: import sys from PyQt5 import QtCore, QtWidgets, QtGui, QtWebEngineWidgets class ApplicationThread(QtCore.QThread): def __init__(self, application, port=5000): super(ApplicationThread, self).__init__() self.application = application self.port = port def __del__(self): self.wait() def run(self): self.application.run(port=self.port) class WebPage(QtWebEngineWidgets.QWebEnginePage): def __init__(self, root_url): super(WebPage, self).__init__() self.root_url = root_url def home(self): self.load(QtCore.QUrl(self.root_url)) def acceptNavigationRequest(self, url, kind, is_main_frame): """open external links in browser and internal links in the regular webview""" # thanks to https://github.com/marczellm/qhangups/blob/cfed73ee4383caed1568c0183a9906180f01cb00/qhangups/WebEnginePage.py ready_url = url.toEncoded().data().decode() is_clicked = kind == self.NavigationTypeLinkClicked if is_clicked and self.root_url not in ready_url: QtGui.QDesktopServices.openUrl(url) return False return True def init_gui(application, port=5000, width=300, height=400, window_title='PyFladesk', icon='appicon.png', argv=None): if argv is None: argv = sys.argv # Application Level qtapp = QtWidgets.QApplication(argv) webapp = ApplicationThread(application, port) webapp.start() # Main Window Level window = QtWidgets.QMainWindow() window.resize(width, height) window.setWindowTitle(window_title) window.setWindowIcon(QtGui.QIcon(icon)) # WebView Level webView = QtWebEngineWidgets.QWebEngineView(window) window.setCentralWidget(webView) # WebPage Level page = WebPage('http://localhost:{}'.format(port)) page.home() webView.setPage(page) window.show() return qtapp.exec_() Back to two classes and a few more lines, but subclassing really feels better here. I also removed assigning the web-view as a window attribute as I don't see the need. Lastly, I added support for user-provided argv (for testing purposes), just in case; not sure if really needed. • My question is, is it worth it to declare a class just to redefine 3 methods, all of them magic methods (ApplicationThread)? Feb 22 '18 at 22:57 • I look at it the other way around. Is it worth it to cram everything into a god method only for the sake of removing classes?
To me, the ugly part is how you reassign methods; when subclassing and overriding are way more natural (and I didn't have to invent anything, just reuse the functions you were already using). Feb 22 '18 at 23:22 • You are right, maybe I'm trying too hard to avoid OOP in Python but it seems like it's the Qt style to subclass and redefine methods. Feb 23 '18 at 0:05 • Don't get me wrong, there are times when I think OOP is the wrong tool for the job. But yes, this is Qt, everything is classes already. Feb 23 '18 at 8:47
# Competition Diophantine Equation I am looking for the number of integer solutions to this system of equations $$a^2+b^2+c^2+d^2=2500$$ and $$(a+50)(b+50)=cd$$ I tried moving terms around in the first equation and using the difference of two squares to try and gain some information, but came up empty. I was wondering what the best way to attack this problem is. Should I take the equations mod something, or should I try and place some bounds on the variables and try to derive the number of solutions from that? I ask that y'all only give me hints. Thanks • Since you entitled your question "Competition Diophantine Equation", I want you to clarify something: are we talking about an ongoing competition, or is this task from a former year? – mrtaurho Sep 28 '18 at 18:05 • This problem is from a previous AMATYC, a math competition for two year colleges. – L. Tim Sep 28 '18 at 19:15 Hint: Use the fact $$2cd\leq c^2+d^2$$ and replace $$cd$$ and $$c^2+d^2$$ with the given constraints. Note that \eqalign{(c-d)^2 &= c^2 + d^2 - 2 c d \cr &= 2500 - a^2 - b^2 - 2 (a+50)(b+50)\cr &= -(a+b+50)^2\cr} Since a square must be nonnegative, we must have $$c=d$$ and $$a+b=-50$$. The second equation then becomes $$a^2 + 50 a + c^2 = 0$$. After completing the square, we see that solutions are related to the Gaussian integer factorization of $$625$$. • This is a hint? – Aqua Sep 28 '18 at 20:15 • Sorry, got carried away. – Robert Israel Sep 28 '18 at 20:17 It is well known that by setting $$r_4(n) = \left|\{(a,b,c,d)\in\mathbb{Z}^4: a^2+b^2+c^2+d^2=n\}\right|$$ we have $$r_4(n) = 8\sum_{\substack{d\mid n\\ 4\nmid d}}d$$ for instance through Jacobi's triple product. In particular the number of ways of writing $$n$$ as a sum of four squares is a constant multiple of a multiplicative function, which only depends on the factorization of $$n$$.
Since $$2500 = 2^2\cdot 5^4$$ we have $$\begin{eqnarray*} r_4(2500) &=& 8\left(1+5+5^2+5^3+5^4+2+2\cdot 5+2\cdot 5^2+2\cdot 5^3+2\cdot 5^4\right)\\&=&8\cdot 3\cdot\frac{5^5-1}{5-1}=6\cdot(5^5-1)=\color{red}{18744}.\end{eqnarray*}$$ In the given problem we have an extra constraint, and it is a heavy one, since it leads to $$2500 = a^2+b^2+c^2+d^2 = cd-ab-50(a+b),$$ $$(a^2+ab+b^2)+50(a+b)+(c^2-cd+d^2) = 0,$$ and by setting $$a=\frac{A-50}{3},b=\frac{B-50}{3}$$ we get $$(A^2+AB+B^2)+9(c^2-cd+d^2) = 7500,\quad A\equiv B\equiv 2\pmod{3}$$ which clearly has a finite number of solutions, since both $$x^2\pm xy+y^2$$ and $$x^2+9y^2$$ are positive definite quadratic forms. The integers represented by $$x^2\pm xy+y^2$$ are the ones given by a product of primes of the form $$3k+1$$ and squares of primes of the form $$3k+2$$. For a brute-force approach, one may simply list all the solutions of the above equation, then check which ones meet $$a^2+b^2+c^2+d^2=2500$$. • Might writing $a^2\pm ab + b^2 = \frac12(a^2\pm 2ab+b^2+a^2+b^2)=\frac12((a\pm b)^2+a^2+b^2)$ help? – marty cohen Sep 28 '18 at 20:49 The system of equations above, $$a^2+b^2+c^2+d^2=2500$$ $$(a+50)(b+50)=cd,$$ has the numerical integer solution $$(a,b,c,d)=(-25,-25,-25,-25)$$ I solve Robert Israel's equation. Completing the square leads to a Pythagorean equation: $$(a+25)^2+c^2=25^2$$ The famous examples $$7^2+24^2=25^2$$ and $$15^2+20^2=25^2$$ (from $$3^2+4^2=5^2$$ scaled by 5), together with the trivial $$0^2+25^2=25^2$$, give solutions. The final answers are (with $$c=d$$ and the two signs taken together): $$(a,b,c,d)=(-5,-45,±15,±15)(-10,-40,±20,±20)(-18,-32,±24,±24)(-1,-49,±7,±7)(-25,-25,±25,±25)(0,-50,0,0)(-50,0,0,0)$$
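Both the four-square count and the constrained solution list are small enough to verify by machine. A Python sketch (my own, not from the thread) that recomputes $r_4(2500)$ by direct counting and enumerates every integer solution of the system:

```python
from math import isqrt

def is_square(n: int) -> bool:
    return n >= 0 and isqrt(n) ** 2 == n

N = 2500

# r_4(2500) by convolving two-square counts, vs. Jacobi's divisor formula.
r2 = [0] * (N + 1)
for a in range(-50, 51):
    for b in range(-50, 51):
        k = a * a + b * b
        if k <= N:
            r2[k] += 1
r4 = sum(r2[k] * r2[N - k] for k in range(N + 1))
jacobi = 8 * sum(d for d in range(1, N + 1) if N % d == 0 and d % 4 != 0)
print(r4, jacobi)  # 18744 18744

# Enumerate solutions that also satisfy (a+50)(b+50) = cd: for each (a, b),
# c and d are determined by c^2 + d^2 = s and c*d = p via (c+d)^2 = s + 2p
# and (c-d)^2 = s - 2p, which must both be perfect squares.
solutions = set()
for a in range(-50, 51):
    for b in range(-50, 51):
        s = N - a * a - b * b        # required c^2 + d^2
        p = (a + 50) * (b + 50)      # required c * d
        if s < 0 or not (is_square(s + 2 * p) and is_square(s - 2 * p)):
            continue
        u, v = isqrt(s + 2 * p), isqrt(s - 2 * p)   # |c+d|, |c-d|
        for su in (u, -u):
            for sv in (v, -v):
                if (su + sv) % 2 == 0:
                    c, d = (su + sv) // 2, (su - sv) // 2
                    if c * d == p and c * c + d * d == s:
                        solutions.add((a, b, c, d))
print(len(solutions))  # 20
```

The 20 solutions found are exactly the sign combinations listed above, and every one of them satisfies $c=d$ and $a+b=-50$, matching Robert Israel's argument.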
User alexander chervov - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-24T19:05:27Z http://mathoverflow.net/feeds/user/10446 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/109706/minimizing-ftx-infty-by-permutation-of-x-i-question-on-fourier-transform Minimizing |FT(X)|_{\infty} by permutation of X_i - question on Fourier transform related to engineering problem (peak factor of OFDM system) Alexander Chervov 2012-10-15T10:38:32Z 2013-05-13T22:22:00Z <p>Consider a vector $X = (X_1, \dots, X_N)$ and its discrete Fourier transform $Y=F(X)$.</p> <p>I am interested in minimizing $|Y|_{\infty}$ over permutations of the numbers $X_i$. How can this be done?</p> <p>Here $|Y|_{\infty}$ is the infinity norm of the vector Y, i.e. just the maximum of the absolute values of the components of Y.</p> <hr> <p>A problem closer to real life is a little more complicated: my numbers $X_i$ are split into several subsequences such that |X|=const within each subsequence, and I am only allowed to permute these subsequences as "blocks". The goal is the same: to minimize $|Y|_{\infty}$.</p> <hr> <p><strong>Background:</strong> roughly speaking, the OFDM-based (= most advanced) radio telecommunication systems (LTE, WiMax, new WiFi) apply the Fourier transform before transmitting data symbols over the air. Average power is fixed, but people also care about the maximal instantaneous power, which they do not want to be big. Instantaneous power is just the maximal component of the vector.</p> http://mathoverflow.net/questions/95125/applications-of-algebraic-geometry-commutative-algebra-to-biology-pharmacology Applications of algebraic geometry/commutative algebra to biology/pharmacology ? Alexander Chervov 2012-04-25T07:30:46Z 2013-05-12T05:34:25Z <p>Are there applications of algebraic geometry/commutative algebra to biology/pharmacology ?</p> <p>It might be that some Groebner basis technique is used somewhere ?
I know there are some applications to robotics - in solving some complicated non-linear equations; maybe something similar can happen in biology...</p> <p>PS</p> <p>related questions:</p> <p><a href="http://mathoverflow.net/questions/94907/applications-of-group-theory-to-math-biology-pharmacology" rel="nofollow">http://mathoverflow.net/questions/94907/applications-of-group-theory-to-math-biology-pharmacology</a></p> <p><a href="http://mathoverflow.net/questions/94840/any-applications-integrable-systems-pde-ode-q-to-math-biology-pharmakin" rel="nofollow">http://mathoverflow.net/questions/94840/any-applications-integrable-systems-pde-ode-q-to-math-biology-pharmakin</a></p> <p><a href="http://mathoverflow.net/questions/94931/graphical-models-and-gene-finding-and-diagnosis-of-diseases" rel="nofollow">http://mathoverflow.net/questions/94931/graphical-models-and-gene-finding-and-diagnosis-of-diseases</a></p> <p><a href="http://mathoverflow.net/questions/95065/applications-of-the-knot-theory-to-biology-pharmacology" rel="nofollow">http://mathoverflow.net/questions/95065/applications-of-the-knot-theory-to-biology-pharmacology</a></p> http://mathoverflow.net/questions/112715/why-when-classification-of-simple-objects-is-simple-e-g-unknown-classifica Why/when classification of simple objects is "simple" ? E.g. (unknown) classification of simple Lie algebras in char =2,3... Alexander Chervov 2012-11-17T20:19:12Z 2013-05-10T15:49:21Z <p>The classification of simple finite-dim Lie algebras for char >=5 was accomplished not so long ago, and char p=2,3 is an open problem.</p> <p>I wonder what is known/expected for char p=2,3 ?</p> <p>A vaguer and softer question is the following - look at some famous classification problems: simple finite-dim Lie algebras, simple finite groups, some other things classified by ADE... We see the following pattern: there are some series of objects and a finite number of "sporadic" objects. I.e.
it never happens that there is an infinite number of examples which are not in "series". So classification of simple objects is simple (in some very informal sense).</p> <p>The question: can we expect this in advance, without obtaining the classification ? (What are other examples/counterexamples of a similar phenomenon ?)</p> <p>For example, can we expect/prove this for simple Lie algebras for char =2,3 ? I.e. that there will be some finite number of series and a finite number of "sporadic" examples ?</p> http://mathoverflow.net/questions/77434/convergence-speed-of-jacobi-eigenvalue-algorithm-for-parallel-orderingbrent-luk Convergence speed of Jacobi eigenvalue algorithm for parallel ordering(Brent-Luk) ? Alexander Chervov 2011-10-07T08:05:27Z 2013-05-03T13:22:00Z <p>Is there an estimate for the convergence of the Jacobi eigenvalue algorithm for Hermitian matrices under the "parallel ordering" (Brent-Luk ordering (see comment below)) ?</p> <hr> <p>For example, for 4x4 matrices the parallel ordering is the following: 1a) 12 1b) 34 2a) 23 2b) 14 3a) 13 3b) 24</p> <hr> <p>[EDIT] Moreover, convergence itself is not known for such an ordering, even in the 4x4 case. I have consulted with many experts in the field - it is not proved. Numerical simulations (checked on more than 10^10 matrices of different forms) show that convergence occurs.</p> <p>There is a certain subtlety in the definition of the method, which led some authors to claim that there is NO convergence. But actually the counter-example does not apply to a "reasonable" implementation of the details.</p> <p>The detail is the following: consider a 2x2 matrix such that the diagonal elements are equal. Then the rotation can be either +45 or -45 - no unique choice. What the authors claim is that if we have the freedom to choose +45 or -45 by our own wish, at each step where the ambiguity occurs, then there is a counterexample ! However this counter-example does NOT work if we fix +45 (or -45) once and forever ! I.e. in the case of ambiguity we ALWAYS choose the angle to be the same.
Simulations show that then there is no problem.</p> <p>[END of EDIT on 21 Jan. 2012]</p> <hr> <p>As far as I can expect, there should be ultimate quadratic convergence [EDIT] actually, as the works of Walter F. Mascarenhas suggest, there will be cubic ultimate convergence [EDIT] but I am interested in the first iterations - there should be at most linear convergence, but it is not clear whether there is a uniform convergence speed or whether there can be some matrices where convergence is arbitrarily bad ? (From simulation we see that probably there are NO bad examples - convergence seems rather fast, but there are certain difficulties in proving this theoretically.)</p> <p>Actually even the convergence for an arbitrary ordering is not clear to me.</p> <p>Paper by Walter Mascarenhas:</p> <p>SIAM. J. Matrix Anal. &amp; Appl. 16, pp. 1197-1209 (13 pages) On the Convergence of the Jacobi Method for Arbitrary Orderings Walter F. Mascarenhas States only convergence of the diagonal elements. Non-diagonal elements may not converge, for some sophisticated orderings. He constructed examples in his PhD at MIT, unpublished (private communication from him)</p> http://mathoverflow.net/questions/128997/mathematical-properties-of-financial-prices Mathematical properties of financial prices Alexander Chervov 2013-04-28T12:21:10Z 2013-04-30T03:54:50Z <p>Prices of financial assets (stock-market prices or currency exchange rates) obviously resemble trajectories of stochastic processes.</p> <p>What is known about their mathematical properties ? </p> <p>I know there is a huge (too huge) literature (see e.g.
<a href="http://mathoverflow.net/questions/119713/financial-mathematics-books" rel="nofollow">MO-Financial Mathematics Books</a>) around it, I am familiar with some ideas, like below, but would be grateful for any comments/suggestions.</p> <p>1) The individual distributions are better modelled by heavy-tailed distributions, rather than by normal distribution, reflecting that sometimes prices change heavily in short time periods. (See e.g. <a href="http://mathoverflow.net/questions/54007/is-there-any-straightforward-way-to-substitute-for-gaussian-brownian-assumptions" rel="nofollow">MO: Is there any straightforward way to substitute for Gaussian/Brownian assumptions in financial mathematics?</a>).</p> <p>2) To some extent they are similar to Brownian (log-Brownian) motion, more precisely the increments are independent at least at some time scales (which means that you cannot win money), (however there are other claims that short time increments are correlated). </p> <p>3) There are claims that fractal (Hausdorff ) dimension is near to 1.5 (the same as for Brownian motion). </p> http://mathoverflow.net/questions/127010/classification-for-coadjoint-orbits-of-lower-or-upper-triangular-matrices/127064#127064 Answer by Alexander Chervov for classification for coadjoint orbits of lower or upper triangular matrices Alexander Chervov 2013-04-10T09:14:56Z 2013-04-10T09:14:56Z <p>Let me first say, that I am not an expert, but was interested in the same question recently, so I would also be happy if someone provides more info on the question. Let me collect some facts, which I know.</p> <p>1) As far as I understand the general classification of orbits is in certain sense "wild" problem. </p> <p>2) Classification up to n=7 can found in <a href="http://arxiv.org/abs/math/0603649" rel="nofollow">Coadjoint orbits of the group $UT(7,K)$ (2006) </a>, further papers by A.Panov and his students give partial results on general $n$, e.g. 
these ones: <a href="http://arxiv.org/abs/0801.3022" rel="nofollow">Involutions in $S_n$ and associated coadjoint orbits</a>, <a href="http://arxiv.org/abs/0902.4584" rel="nofollow">Diagram method in research on coadjoint orbits (2009) </a></p> <p>3) There are lots of recent studies "related" to the question, especially in the case where the ground field is finite. In that case people are greatly interested in understanding the representation theory of U(n,F_q), and in particular the "orbit method" approach to it, and hence in coadjoint orbits. See some comments at the mathoverflow question: <a href="http://mathoverflow.net/questions/126932/finite-unipotent-groups-references/126982#126982" rel="nofollow">Finite Unipotent Groups: References</a>. </p> <p>4) If you restrict the ground field to be finite, then it is worth mentioning several facts: a) the numbers of adjoint and coadjoint orbits are the same; b) they coincide with the number of conjugacy classes in the group; c) hence with the number of irreps; d) this is related to interesting combinatorics, see e.g. the paper <a href="http://www.emis.ams.org/journals/SC/1997/2/pdf/smf_sem-cong_2_35-42.pdf" rel="nofollow">A.A. Kirillov, A. Melnikov On a Remarkable Sequence of Polynomials</a> and other papers by these and other authors.
</p> <p>For the finite field, these related MO questions: <a href="http://mathoverflow.net/questions/126932/finite-unipotent-groups-references/126982#126982" rel="nofollow">Finite Unipotent Groups: References</a>, <a href="http://mathoverflow.net/questions/106521/representation-theory-of-p-groups-in-particular-upper-tringular-matrices-over-f-p" rel="nofollow">Representation theory of p-groups in particular upper tringular matrices over F_p</a>, <a href="http://mathoverflow.net/questions/68207/irreducible-representations-of-the-unitriangular-group" rel="nofollow">Irreducible representations of the unitriangular group</a>.</p> http://mathoverflow.net/questions/126932/finite-unipotent-groups-references/126982#126982 Answer by Alexander Chervov for Finite Unipotent Groups: References Alexander Chervov 2013-04-09T14:03:06Z 2013-04-09T14:09:33Z <p>I guess U(n,F_q) means the upper triangular matrices over F_q. I am not an expert in the field, but I was recently interested in a similar question, so I'll put down these remarks.</p> <p>There is a certain amount of quite recent work related to representations of U(n,F_q), with some "big names" involved. There are certain conjectures which are easy to state but hard to prove, and some more conceptual problems and recent breakthroughs.</p> <p>One of the origins of the modern interest are the papers by A.A. Kirillov (1995-2005), e.g. <a href="http://books.google.ru/books?hl=ru&amp;lr=&amp;id=FmtztRqFps0C&amp;oi=fnd&amp;pg=PA43&amp;dq=Kirillov+variation+on+triangular+theme&amp;ots=jIWQOzr99R&amp;sig=2POfCy5i9KPqOd2Yrh_Ds_RNKgg&amp;redir_esc=y#v=onepage&amp;q=Kirillov%2520variation%2520on%2520triangular%2520theme&amp;f=false" rel="nofollow">Variation on a triangular theme</a>, <a href="http://link.springer.com/chapter/10.1007/978-1-4612-0029-1_11" rel="nofollow">Two more variations on a triangular theme</a>, where he considered the question of whether the "orbit method" can be extended to finite Lie groups such as U(n,F_q). (Originally (in the 60-ies) A.A.
Kirillov proposed "orbit method" for Lie groups over R, and U(n,R) where the first groups where he demonstrated its work).</p> <p>Kirillov's papers are always pleasure to read, he puts the accent not on technical details, but on new ideas, problems and insighting observations. The moral that in certain cases "orbit method" can be applied, but there some problems, which are deserved to be further studied to achieve further progress. See e.g. Vipul Naik's site, where one can find lots of interesting and understandable information, worked out examples and further references: <a href="http://groupprops.subwiki.org/wiki/Linear_representation_theory_of_unitriangular_matrix_group%3AUT%283,p%29" rel="nofollow">orbit method for U(3,p)</a>, <a href="http://groupprops.subwiki.org/wiki/Kirillov_orbit_method_for_finite_Lazard_Lie_group" rel="nofollow">orbit method for finite Lazard groups</a>.</p> <hr> <p>One of the major breakthroughs in the subject is an idea of "supercharacters". Let me quote from <a href="http://math.ucsd.edu/~eariasca/papers/UniRW.pdf" rel="nofollow">E. Arias-Castro P. Diaconis R. Stanley</a></p> <blockquote> <p>The character theory of U(n,F_q) is a well known nightmare. In recent work, Carlos Andre, Roger Carter and Ning Yan have developed a theory based on certain unions of conjugacy classes (here called super-classes) and sums of irreducible characters (here called super-characters). </p> </blockquote> <p>The main point is that "classification of characters" is to certain extent "wild" problem, while "supercharacters" are quite manageable to classify. </p> <p>See more comments in <a href="http://mathoverflow.net/questions/68207/irreducible-representations-of-the-unitriangular-group" rel="nofollow">MO question- "Irreducible representations of the unitriangular group" </a>.</p> <hr> <p>Let me also quote from F. 
Ladisch's answer to the <a href="http://mathoverflow.net/questions/106521/representation-theory-of-p-groups-in-particular-upper-tringular-matrices-over-f-p" rel="nofollow">MO question "Representation theory of p-groups in particular upper triangular matrices over F_p"</a>:</p> <blockquote> <p>There are, by now, many papers about the character theory of the upper triangular group and related topics, which is in part motivated by Higman's conjecture that for every n, the number of conjugacy classes of Un(Fq) is a polynomial in q with integer coefficients.</p> </blockquote> <hr> <p>Let me also mention a <a href="http://mathoverflow.net/questions/39606" rel="nofollow">"23 page article with 28 authors :)"</a> which is devoted to the subject: <a href="http://arxiv.org/abs/1009.4134" rel="nofollow">Supercharacters, symmetric functions in noncommuting variables, and related Hopf algebras</a></p> http://mathoverflow.net/questions/125840/a-direct-proof-of-the-harer-zagier-recursion-enumerating-the-ways-to-paste-a-2n-g/125847#125847 Answer by Alexander Chervov for A direct proof of the Harer-Zagier recursion enumerating the ways to paste a 2n-gon to get a genus g surface? Alexander Chervov 2013-03-28T18:38:17Z 2013-03-30T00:30:22Z <p><a href="http://arxiv.org/abs/0712.2448" rel="nofollow">http://arxiv.org/abs/0712.2448</a></p> <p>Gluing of Surfaces with Polygonal Boundaries, E. T. Akhmedov, Sh. Shakirov</p> <p>By pairwise gluing of edges of a polygon, one produces two-dimensional surfaces with handles and boundaries. In this paper, we count the number ${\cal N}_{g,L}(n_1, n_2, \ldots, n_L)$ of different ways to produce a surface of given genus $g$ with $L$ polygonal boundaries with given numbers of edges $n_1, n_2, \ldots, n_L$. Using combinatorial relations between graphs on real two-dimensional surfaces, we derive recursive relations between ${\cal N}_{g,L}$. We show that Harer-Zagier numbers appear as a particular case of ${\cal N}_{g,L}$ and derive a new explicit expression for them.
Comments: 7 pages, 9 figures</p> <p>It seems to propose a quite elementary proof. The key idea is that they found some generalization which is easier to prove.</p> http://mathoverflow.net/questions/110855/product-of-conjugacy-classes-is-there-an-analog-of-tanaka-krein-reconstruction Product of conjugacy classes - is there an analog of Tanaka-Krein reconstruction ? Alexander Chervov 2012-10-27T20:31:16Z 2013-03-20T12:02:17Z <p>Consider a finite group G. The product of conjugacy classes can be defined in a natural way just by multiplying the representatives and counting multiplicities (see e.g. <a href="http://mathoverflow.net/questions/62088/products-of-conjugacy-classes-in-s-n" rel="nofollow">MO 62088</a>). So we get a ring with a basis whose structure constants are natural numbers, similar to what one has for the product of irreps. There are many analogies between conjugacy classes and irreps; in particular see <a href="http://journals.cambridge.org/action/displayFulltext?type=1&amp;fid=3077776&amp;jid=PEM&amp;volumeId=30&amp;issueId=01&amp;aid=3077768&amp;bodyId=&amp;membershipNumber=&amp;societyETOCSession=" rel="nofollow">this article</a>.</p> <p>Tanaka-Krein duality states that a group can be reconstructed from the tensor category of its representations, which is semisimple for finite groups and hence carries the same information as the ring + basis of irreps.</p> <p><strong>Question:</strong> Can one reconstruct a group from the (ring + basis) made of conjugacy classes ?</p> <p>If not - what partial information (e.g. the character table) can one get ? </p> <hr> <p><strong>Question:</strong> Is there any relation between this ring and the ring of irreps of the same group ? or maybe of some other group ?</p> <p>(Remark.
For an abelian group they are isomorphic.)</p> <p><strong>Question:</strong> Are there any further analogies between the ring of irreps and that of conjugacy classes, except those mentioned in the paper cited above ?</p> http://mathoverflow.net/questions/123796/role-of-applications-in-modern-mathematics Role of applications in modern mathematics Alexander Chervov 2013-03-06T18:12:13Z 2013-03-07T13:11:42Z <p>In older days scientists were universalists, and philosophy, physics and mathematics were part of the same quest - understanding the world. Nowadays one may get the feeling that the role of applications in the development of modern mathematics is negligible - of course, it depends on the field. And the aim of the question is to get different opinions from different points of view. </p> <p><strong>Question 1</strong> What is the role of applications in modern mathematics ?</p> <p><strong>Question 2</strong> Different countries have different mechanisms to stimulate interaction between mathematics and applications - what are these mechanisms and what are their advantages and disadvantages ? </p> <p><strong>Question 3 (for pure mathematicians)</strong> What is your personal stance on applications ? Is it outside your scope of interests, or do you have (or try to have) some contact with applications ?</p> http://mathoverflow.net/questions/101169/not-especially-famous-long-open-problems-which-higher-mathematics-beginners-can/101180#101180 Answer by Alexander Chervov for Not especially famous, long-open problems which higher mathematics beginners can understand Alexander Chervov 2012-07-02T21:04:50Z 2013-02-22T07:31:33Z <p>The <strong>Hot spot conjecture</strong> The conjecture seems quite amazing and is simple to formulate; it can even be understood by persons "from the street", and it seems its prediction can be tested experimentally. It is a subject of <a href="http://michaelnielsen.org/polymath1/index.php?title=The_hot_spots_conjecture" rel="nofollow">"polymath project 7"</a>.
Let me quote:</p> <blockquote> <p>The hotspots conjecture can be expressed in simple English as:</p> <p><strong>Suppose a flat piece of metal, represented by a two-dimensional bounded connected domain, is given an initial heat distribution which then flows throughout the metal. Assuming the metal is insulated (i.e. no heat escapes from the piece of metal), then given enough time, the hottest point on the metal will lie on its boundary.</strong></p> <p>In mathematical terms, we consider a two-dimensional bounded connected domain D and let u(x,t) (the heat at point x at time t) satisfy the heat equation with Neumann boundary conditions. We then conjecture that</p> <p><strong>For sufficiently large t > 0, u(x,t) achieves its maximum on the boundary of D</strong></p> <p>This conjecture has been proven for some domains and proven to be false for others. In particular it has been proven to be true for obtuse and right triangles, but the case of an acute triangle remains open. The proposal is that we prove the Hot Spots conjecture for acute triangles! Note: strictly speaking, the conjecture is only believed to hold for generic solutions to the heat equation. As such, the conjecture is then equivalent to the assertion that the generic eigenvectors of the second eigenvalue of the Laplacian attain their maximum on the boundary. 
A stronger version of the conjecture asserts that</p> <p>For all non-equilateral acute triangles, the second Neumann eigenvalue is simple; and the second Neumann eigenfunction attains its extrema only at the boundary of the triangle.</p> <p>(In fact, it appears numerically that for acute triangles, the second eigenfunction only attains its maximum on the vertices of the longest side.)</p> </blockquote> <p>==========================================================</p> <p>Maybe this problem can be mentioned when teaching determinants, and in particular: </p> <p>$\det(AB)= \det(A)\det(B).$</p> <p>There are so-called <a href="http://en.wikipedia.org/wiki/Capelli%27s_identity" rel="nofollow">Capelli identities</a> which generalize this formula for specific matrices with non-commutative entries. In the paper <a href="http://arxiv.org/abs/0809.3516" rel="nofollow">Noncommutative determinants, Cauchy-Binet formulae, and Capelli-type identities</a> by Sergio Caracciolo, Andrea Sportiello, Alan D. Sokal they formulate certain conjectures of the type $$\det(A)\det(B)=\det(AB+\text{correction})$$ on page 36 (bottom), conjectures 5.1, 5.2.</p> <p>I think these are quite non-trivial, but probably some smart young mathematician may solve them, given some amount of time (maybe some months). I spent some amount of time thinking about them without success, and moreover let me mention that D. Zeilberger and D. Foata also failed to find a combinatorial proof of a Capelli identity of very similar type -- the one proved by Kostant-Sahi and Howe-Umeda -- see their comments in <a href="http://arxiv.org/abs/math/9309212" rel="nofollow">Combinatorial Proofs of Capelli's and Turnbull's Identities from Classical Invariant Theory</a> page 9 bottom: "Although we are unable to prove the above identity combinatorially ... ".
So the words above are some indications of the non-triviality of the conjectures.</p> <p>Personally I am quite interested in a proof; probably it can give a clue for further generalizations.</p> http://mathoverflow.net/questions/121871/algorithm-to-solve-sokoban-like-game-on-graphs-move-chips-from-one-set-of-verti Algorithm to solve Sokoban-like game on graphs - move chips from one set of vertices to another Alexander Chervov 2013-02-15T07:03:37Z 2013-02-20T22:30:37Z <p>The question is close to the <a href="http://en.wikipedia.org/wiki/Sokoban" rel="nofollow">Sokoban</a> game (thanks to Dima Pasechnik !), but a little different in details.</p> <p>Consider a directed graph (multi-graph). Consider some set of marked chips (chip1, chip2, ..., chipM). Put the chips on some set of vertices 'Init1','Init2','Init3'... And consider some other set of vertices 'Final1','Final2',..., 'FinalM'.</p> <p><strong>Question</strong> Propose an "efficient" algorithm which will determine whether it is possible to "MOVE" the chips from positions "InitNN" to positions 'FinalNN'.</p> <p>Here we are allowed to "MOVE" a chip from a vertex to an outgoing edge and from an incoming edge to the corresponding vertex, with the CONSTRAINT that two chips are NOT allowed to be at the same place. One move moves only ONE chip. ChipK should go to position FinalK - same "K".</p> <p><strong>Question</strong> There can be many approaches to solving the problem; I am interested in analyzing their complexity. Any ideas are welcome. For example, if the graph is "random" in a certain sense, what algorithm has the least average complexity ? </p> <p>Here complexity is counted in the number of operations (write C code (I actually wrote Matlab code), compile it and count the number of cycles - this is a well-defined complexity measure; different compilers and CPUs will give approximately the same result). </p> <p><strong>Example of algorithm</strong> It seems the simplest way to solve the problem is the following.
Essentially it can be reduced to determining whether two vertices are connected in some bigger graph, which in turn can be solved by "breadth-first search" ("wave algorithm" in Russian) (I mean: let us enumerate all possible chip configurations - these will be the vertices of the "new graph". Let us connect two vertices (configurations) if there is a "MOVE" which goes from one to another.) By "breadth-first search" ("wave algorithm" in Russian) I mean the following - take an initial vertex and find all vertices connected to it; at the next step find all vertices connected to the vertices found at the previous step; and so on....</p> <p><strong>Question</strong> What about the efficiency of this algorithm ? Can one propose a better one ?</p> http://mathoverflow.net/questions/101644/fiction-books-about-mathematicians Fiction books about mathematicians? Alexander Chervov 2012-07-08T11:16:59Z 2013-02-19T13:38:40Z <p>What are some fiction books about mathematicians? </p> <p>It seems to me rather difficult for writers to create good books on this subject. Some years ago I thought there were no such books at all. There are many reasons: it is difficult to describe the process of discovery in an exciting way. The subject has a narrow audience and is not the way to make a best-seller...</p> <p>Comments on how authors try to avoid these problems are also welcome.
The movie "A Beautiful Mind" is a (beautiful for me) example, where the story of mathematician was mixed with love and spy stories to make it interesting for general audience, well not so much preserved from mathematician's story, but nevertheless I am quite positive about it.</p> <p>Here is a related MO question:</p> <p><a href="http://mathoverflow.net/questions/77279/movies-about-mathematics-mathematicians-closed" rel="nofollow">Movies about mathematics mathematicians</a></p> http://mathoverflow.net/questions/122180/lie-algebra-embeddings-and-the-center-of-their-enveloping-algrabras/122212#122212 Answer by Alexander Chervov for Lie algebra embeddings and the center of their enveloping algrabras Alexander Chervov 2013-02-18T18:45:11Z 2013-02-18T18:45:11Z <p>Take a look at "Shifted Schur Functions"</p> <p>Andrei Okounkov, Grigori Olshanski <a href="http://arxiv.org/abs/q-alg/9605042" rel="nofollow">http://arxiv.org/abs/q-alg/9605042</a></p> <p>Section 10: "Coherence property of quantum immanants and shifted Schur polynomials"</p> <p>In particular formulas 10.4, 10.5 - they discuss "averaging operators" Z(U(gl(n)) -> ZU(gl(N)) , n &lt; N</p> <p>and later prove certain "good" (coherence) property of special generators of the centers Z(U(gl(k)) which has been studied by the authors and M. Nazarov.</p> <p>Hope this helps...</p> <p>What is very interesting for me personally - is try to generalize such things to the case of loop algebras Z(U(\hat gl)). Here certain "good" elements of the centers has been constructed by <a href="http://arxiv.org/abs/0711.2236" rel="nofollow">Talalaev's formula</a>, it is natural to expect that Okounkov-Olshanski-... story can be generalized to loop algebra case</p> http://mathoverflow.net/questions/121162/math-behind-databases-management-and-sql Math behind databases management and SQL ? Alexander Chervov 2013-02-08T08:32:34Z 2013-02-18T17:53:27Z <p>Are there some mathematical theories/theorems/... 
behind the modern development of database management systems and in particular of <a href="http://en.wikipedia.org/wiki/SQL" rel="nofollow">SQL</a> ?</p> <p>I am refreshing my knowledge of these things, which are quite down-to-earth "how to use" (create table ..., insert..., select * from ...), but I think some deeper understanding of what is behind them would be helpful. </p> <p>In particular, in Wikipedia one may find some relations with 3-valued logic: </p> <blockquote> <p>Along with True and False, the Unknown resulting from direct comparisons with Null thus brings a fragment of three-valued logic to SQL. The truth tables SQL uses for AND, OR, and NOT correspond to a common fragment of the Kleene and Lukasiewicz three-valued logic (which differ in their definition of implication, however SQL defines no such operation).</p> </blockquote> <p>But it is not very clear to me what this means and how deep it is.</p> http://mathoverflow.net/questions/122071/construction-of-one-graph-from-another-known-sokoban-graph-chips-configur Construction of one graph from another - known ? ("Sokoban graph", chips configurations) Alexander Chervov 2013-02-17T15:40:20Z 2013-02-17T15:40:20Z <p>Consider a [directed] graph and a natural number "m". Let us construct new [directed] graph[s] from it as follows. There is one idea behind the construction, but one can play with details and get several constructions. </p> <p><strong>Vertices of new graph</strong> - all possible configurations of "m" ([non]-marked) chips on the vertices (edges) of the original graph, [with/without] the constraint that two chips should be in different positions.</p> <p><strong>Edge of new graph</strong> - two vertices are connected if one configuration of chips can be obtained from another in one "MOVE".</p> <p>By "MOVE" I mean that we can move ONE chip from a vertex of the original graph to a neighboring (edge-adjacent) vertex of the original graph.</p> <p><strong>Question</strong> Is this construction well known ? What is its name / a reference ?
How are properties of the original graph related to those of the new graph ? </p> <p><strong>Motivation:</strong> Such a graph appears if one thinks about the question: </p> <p><a href="http://mathoverflow.net/questions/121871/algorithm-to-solve-sokoban-like-game-on-graphs-move-chips-from-one-set-of-verti" rel="nofollow">http://mathoverflow.net/questions/121871/algorithm-to-solve-sokoban-like-game-on-graphs-move-chips-from-one-set-of-verti</a></p> <p>The solution to the question above is related to considering the construction above and checking whether two vertices of the new graph are connected. </p> <p>Roland Bacher in his answer describes a similar idea which is directly related to the Sokoban game.</p> <p>See also: </p> <p><a href="http://mathoverflow.net/questions/122058/path-search-algorithms-on-graphs" rel="nofollow">http://mathoverflow.net/questions/122058/path-search-algorithms-on-graphs</a></p> http://mathoverflow.net/questions/122058/path-search-algorithms-on-graphs Path search algorithms on graphs Alexander Chervov 2013-02-17T12:46:40Z 2013-02-17T12:46:40Z <p>Consider a directed graph and two vertices on it. I need to determine whether there is a path between them. There is the "breadth-first search" ("wave algorithm" in Russian) algorithm (see description below).</p> <p><strong>Question</strong> What are the alternatives, and is it known what kind of algorithm has lower complexity on some specific types of graphs, e.g. random graphs, "sokoban-graphs" (see below) ?</p> <p>Roughly speaking, the breadth-first algorithm looks at ALL paths outgoing from "A" of length 1, then of length 2, then of length 3, ...</p> <p>It has a clear intuitive disadvantage if the graph is "very connected" - we are looking at too many "short paths" - it would be better to take one "long path" which goes from vertex "A" to "somewhere near" the destination "B", and then find a path from "somewhere near B" to "B" by breadth-first search.
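One standard alternative in exactly this spirit is bidirectional search: run a breadth-first "wave" forward from "A" and a second wave backward from "B" (along reversed edges), stopping as soon as the two waves meet. A minimal sketch (the names and the adjacency-list representation are my own assumptions, not from the question):

```python
from collections import deque

def _expand(adj, frontier, seen):
    """Advance one BFS layer ("wave step"); return the new frontier."""
    nxt = deque()
    for v in frontier:
        for w in adj.get(v, ()):
            if w not in seen:
                seen.add(w)
                nxt.append(w)
    return nxt

def bidirectional_reachable(adj, radj, a, b):
    """Is there a directed path a -> b?  adj maps each vertex to its
    out-neighbors; radj is the same for the edge-reversed graph."""
    if a == b:
        return True
    seen_f, seen_b = {a}, {b}
    front_f, front_b = deque([a]), deque([b])
    while front_f and front_b:
        # expand the smaller wave first, to keep both waves small
        if len(front_f) <= len(front_b):
            front_f = _expand(adj, front_f, seen_f)
        else:
            front_b = _expand(radj, front_b, seen_b)
        if seen_f & seen_b:      # the waves met: a path exists
            return True
    return False
```

When either wave saturates its side of the graph without meeting the other, no path exists; on "very connected" graphs this typically explores far fewer vertices than a single forward wave.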
Of course, here we should somehow be able to explain what "somewhere near" means and propose a strategy to find a path from "A" to it. For some classes of graphs - <a href="http://en.wikipedia.org/wiki/Convolutional_code#Trellis_diagram" rel="nofollow">"trellis graphs"</a> - it is clear what this means; I do not know in general. </p> <p><strong>Motivation</strong> comes from the question </p> <p><a href="http://mathoverflow.net/questions/121871/algorithm-to-solve-sokoban-like-game-on-graphs-move-chips-from-one-set-of-verti" rel="nofollow">http://mathoverflow.net/questions/121871/algorithm-to-solve-sokoban-like-game-on-graphs-move-chips-from-one-set-of-verti</a></p> <p>The chip-movement or Sokoban-like problem can be reduced to the question of the existence of a path between two vertices. However, the graph appearing here is quite specific - the vertices of the "big new graph" are configurations of chips on the original graph, and they are connected if there is a "MOVE" from one configuration to another. </p> <p>So, taking into account these specific properties of that graph, what algorithm should one use to settle the path-existence problem? </p> http://mathoverflow.net/questions/120612/trichotomies-in-mathematics/121505#121505 Answer by Alexander Chervov for Trichotomies in mathematics Alexander Chervov 2013-02-11T17:07:44Z 2013-02-11T17:07:44Z <p>Let me point out that Vladimir Arnold was quite interested in a similar question. He called the subject "mathematical trinities"; see e.g. his paper <a href="http://www.maths.ed.ac.uk/~aar/papers/arnold4.pdf" rel="nofollow">"Symplectization, Complexification and Mathematical Trinities"</a>. As far as I remember from his lectures, his idea was that many of these "trinities" are actually related to each other; and he also considered the subject
as a tool to invent new theories: see the question marks in the already cited "Arnold's table": <a href="http://concretenonsense.files.wordpress.com/2008/11/arnoldtable.jpg" rel="nofollow">jpg</a>.</p> <hr> <p>Let me also mention some "trinities" which occur in my own research related to <a href="http://en.wikipedia.org/wiki/Capelli%27s_identity" rel="nofollow">Capelli identities</a> (which are some non-commutative analogs of det(AB)=det(A)det(B) ).</p> <p>Matrix trinity - a) generic b) symmetric c) antisymmetric</p> <p>Here is how it goes in Capelli (and related Cayley) identities: </p> <p>a) generic matrices - the original Capelli identity was discovered by Capelli in the 19th century - it is for "generic matrices" $A=x_{ij}$, $B = \partial_{ji}$ </p> <p>b) <a href="http://en.wikipedia.org/wiki/Capelli%27s_identity#Turnbull.27s_identity_for_symmetric_matrices" rel="nofollow">symmetric matrices - an analog of the Capelli identity</a> was discovered by Turnbull around the 1940s - here $A=(x_{ij}+x_{ji})$, $B= \partial_{ji} + \partial_{ij}$.</p> <p>c) <a href="http://en.wikipedia.org/wiki/Capelli%27s_identity#The_Howe.E2.80.93Umeda.E2.80.93Kostant.E2.80.93Sahi_identity_for_antisymmetric_matrices" rel="nofollow">antisymmetric matrices - an analog</a> was found by Howe-Umeda and Kostant-Sahi around 1990; here $A=(x_{ij}-x_{ji})$, $B= \partial_{ji} - \partial_{ij}$.</p> <p>Similar generalizations were found for the Cayley identity, respectively: a) attributed to Cayley b) Garding 1948 c) Shimura 1984 - see <a href="http://arxiv.org/abs/1105.6270" rel="nofollow">arXiv:1105.6270</a> for quite complete information.</p> <p><strong>My question</strong>: is it really a trinity ? Or can one propose some analogs of Cayley-Capelli identities for some other matrices, say "symplectic" ? </p> <hr> <p>It might be strange, but other trinities like R, C, H also appear in the Capelli story - and they give different identities.
Moreover, trinities can be combined and we might get trinity^trinity^trinity...</p> <p>Actually the H-analog of the Capelli identity is not fully known for the moment - only the analog for 1x1 matrices has been discovered, quite recently, by a student of R. Borcherds, <a href="http://arxiv.org/abs/1102.2657" rel="nofollow">An Huang</a>. Looking at this example I proposed some C-analogs of <a href="http://arxiv.org/abs/1203.5759" rel="nofollow">Capelli identities</a>. Actually all of generic/symmetric/antisymmetric can be complexified; hopefully there should exist quaternionic analogs, and thus we might have trinity^trinity. Some partial results in the trinity^trinity spirit for the Cayley identity are contained in loc. cit.</p> <p>There are certain analogs of <a href="http://arxiv.org/abs/q-alg/9712021" rel="nofollow">Capelli identities for classical Lie algebras</a>: this can be seen as a gl/so/su trinity, though probably it is not a trinity in a strict sense. I have no idea whether we can have something like trinity^trinity^trinity ...</p> http://mathoverflow.net/questions/101471/what-is-matrix-a-such-that-hamming-weight-of-x-ax-is-maximal-min-distance What is matrix A such that Hamming weight of [x, Ax] is maximal ? (Min distance of 1/2 block code?) Alexander Chervov 2012-07-06T09:32:07Z 2013-02-10T18:28:29Z <p>Everything over F_2. </p> <p>For any matrix $A$ define the number $N(A) = \min_{x \neq 0}$ <a href="http://en.wikipedia.org/wiki/Hamming_weight" rel="nofollow">HammingWeight</a> $( [x , Ax])$, where $x$ is a nonzero vector and $[a,b]$ is just the concatenation of vectors: $(a_1,\ldots,a_n, b_1,\ldots,b_m)$.</p> <p><strong>Question</strong> What is $\max_{A \in Mat(n,m) } (N(A))$ ? </p> <p>The particular case n=m. </p> <hr> <p>Motivation.</p> <p>The map $x \to [x, Ax]$ can be considered as an error-correcting code: $x$ - the information bits, $Ax$ - the redundancy bits.</p> <p>The code is good if the distance between codewords is large.</p> <p>Reformulation of the question: what is the "best possible" code of the type above ?
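For tiny n the quantities in question can be checked by brute force; a hypothetical sketch (the function names are mine; the minimum is over nonzero x, since x = 0 always gives weight 0):

```python
from itertools import product
import numpy as np

def N(A):
    """N(A) = min over nonzero x in F_2^n of HammingWeight([x, Ax])."""
    n = A.shape[1]
    best = None
    for bits in product([0, 1], repeat=n):
        x = np.array(bits)
        if not x.any():
            continue                      # skip x = 0 (trivial weight 0)
        w = int(x.sum() + ((A @ x) % 2).sum())
        best = w if best is None else min(best, w)
    return best

def best_code(n):
    """max over all n x n matrices A of N(A), by exhaustive search
    over all 2^(n^2) matrices -- feasible only for tiny n."""
    return max(N(np.array(bits).reshape(n, n))
               for bits in product([0, 1], repeat=n * n))
```

For n = 2 this exhaustive search gives 2 (attained e.g. by the identity matrix); the search is exponential in n^2, so it is only a sanity check, not an approach to the general question.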
("best possible" in the sense of minimal distance -- it is not always "best" from practical point of view nevertheless).</p> http://mathoverflow.net/questions/87053/papers-archives-especially-not-indexed-by-google papers archives? (especially not indexed by google) Alexander Chervov 2012-01-30T18:18:57Z 2013-02-07T11:24:07Z <p><a href="http://www.digizeitschriften.de/index.php?id=239&amp;L=2" rel="nofollow">http://www.digizeitschriften.de/index.php?id=239&amp;L=2</a> has many papers with free access (e.g. Inventiones Mathematicae) but when you search with scholar.google.com it does not index this site!</p> <p>Are there any other archives like this? </p> <p>Just in case let me list other archives (they are indexed by google as far as I understand).</p> <p><a href="http://projecteuclid.org" rel="nofollow">http://projecteuclid.org</a> </p> <p><a href="http://www.numdam.org/?lang=fr" rel="nofollow">http://www.numdam.org/?lang=fr</a></p> <p><a href="http://www.math.uiuc.edu/K-theory/" rel="nofollow">http://www.math.uiuc.edu/K-theory/</a></p> <p>PS</p> <p>e.g. I cannot find:</p> <p>Koszul, J (1981), "Les algebres de Lie graduées de type sl (n, 1) et l'opérateur de A. Capelli", C.R. Acad. Sci. Paris (292): 139-141</p> <p>Does it mean search skills are poor or it is really not available electronically? </p> http://mathoverflow.net/questions/121033/classical-limit-and-drinfelds-realization-of-quantum-groups/121041#121041 Answer by Alexander Chervov for Classical limit and Drinfelds realization of quantum groups Alexander Chervov 2013-02-07T06:09:53Z 2013-02-07T06:09:53Z <p>It is more like comment, but seems too long.</p> <p>I would say yes. I cannot give precise reference, but by all the idealogy it is yes. Or you see some "underwater stones" - problems ? </p> <p>Ideas are like this: If you start with RLL=LLR description, then you need to make "Gauss" or "tringular" decomposition of L = LowTriangular*D*UpperTrianular to extract Drinfeld's currents. 
</p> <p>In the classical limit "everything" takes the form A = A_{cl} + O(h). So take L = L_{cl} + O(h), Triangular = Triangular_{cl} + O(h).</p> <p><strong>The simple fact that hints at the "yes" answer is the following.</strong> The decomposition L = LowTriangular*D*UpperTriangular in the classical limit corresponds to L_{cl} = LowTriangular_{cl} + D_{cl} + UpperTriangular_{cl}. </p> <p>You see that multiplication, at first order, corresponds to addition. And this means that the corresponding classical currents are just the appropriate upper/lower triangular parts - which corresponds to x(z).</p> http://mathoverflow.net/questions/119991/is-the-quantum-algebra-unique-up-to-isomorphism-in-deformation-quantization Is the quantum algebra unique (up to isomorphism) in deformation quantization ? Alexander Chervov 2013-01-27T05:54:09Z 2013-01-30T07:43:53Z <p>Consider a Poisson algebra A (i.e. a commutative algebra with a Poisson bracket).</p> <p>Let $\hat A$ be a deformation quantization of the algebra A. We know that the construction of a deformation quantization, and more generally the "<a href="http://arxiv.org/abs/q-alg/9709040" rel="nofollow">formality isomorphism of Kontsevich</a>", depends on choices of certain data (in Kontsevich's approach we can change the "propagator" and the coordinates on the manifold; in <a href="http://arxiv.org/abs/math/9803025" rel="nofollow">Tamarkin's approach</a> we can choose an arbitrary associator).</p> <p><strong>Question</strong> Is it known/true/expected that for different choices of "that datum" we nevertheless obtain isomorphic quantum algebras $\hat A$? </p> <p>Maybe one needs certain restrictions on the setup (e.g. only smooth algebras A, only "generic" quantization morphisms, whatever...) to guarantee uniqueness ? </p> <p><a href="http://arxiv.org/abs/math/9904055" rel="nofollow">Kontsevich also mentions</a> that the Grothendieck-Teichmuller group should act on the set of all deformation quantizations.
Is it at least true that two quantizations living in the same orbit of that group give isomorphic quantum algebras ? </p> <p><strong>Question in formally precise form</strong> Consider a Poisson algebra. Choose two different formality isomorphisms (e.g. with different propagators or associators). </p> <p>Define two star-products $\star$ and $\star'$ with the help of these two formality isomorphisms. </p> <p><strong>Question</strong> Are the algebras defined by these two star-products isomorphic ? More strongly - are these star-products "equivalent" ? (See the definition of equivalence in Stefan Waldmann's answer below or in Kontsevich's paper.)</p> <hr> <p>Some comments. </p> <p>If our manifold is R^2n with the canonical Poisson bracket $\{ p_i, q_j \} = \delta_{ij}$, then undoubtedly the quantum algebra should be unique and isomorphic to the Heisenberg algebra: $[ \hat p_i , \hat q_j ] = \delta_{ij}$. But I am not sure that even in this case it is all that obvious - if we change coordinates we can make the Poisson bracket arbitrarily weird, and it would be non-obvious that we get an isomorphism with the Heisenberg algebra. Moreover, it might depend on the category we are working in (polynomial or smooth functions).</p> <p>A more general example is a Lie algebra $g$ - the corresponding quantization should be isomorphic to the universal enveloping algebra, but again this is not that obvious. (In his paper Kontsevich devotes a special (short) argument to proving that his quantum algebra is isomorphic to U(g).)</p> <p>Concerning choices of different coordinates on the classical algebra A - it is already stated in Kontsevich's paper that the obtained algebras will be isomorphic. More strongly, the star-products will be "equivalent".<br> See the last formula on page 3 of his paper. However, nothing is said about choices of different propagators. </p> <p>I discussed this question with some experts some years ago, but there was no clear answer.
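In the flat case the star product mentioned above can be written down explicitly (the Moyal product), which makes the objects concrete. The following is a small stdlib-only sketch - the dictionary encoding of polynomials and all names are mine, and this is the standard flat-space Moyal formula, not Kontsevich's general construction - verifying $q \star p - p \star q = i\hbar$ up to second order:

```python
from math import comb, factorial

# Polynomials in q, p, hbar with complex coefficients, stored as a dict
# mapping (a, b, c) -> coeff, meaning coeff * q^a * p^b * hbar^c.

def dq(f):
    return {(a - 1, b, c): v * a for (a, b, c), v in f.items() if a}

def dp(f):
    return {(a, b - 1, c): v * b for (a, b, c), v in f.items() if b}

def mul(f, g):
    out = {}
    for (a1, b1, c1), v1 in f.items():
        for (a2, b2, c2), v2 in g.items():
            k = (a1 + a2, b1 + b2, c1 + c2)
            out[k] = out.get(k, 0) + v1 * v2
    return out

def add(f, g):
    out = dict(f)
    for k, v in g.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def scale(f, s, hpow=0):
    return {(a, b, c + hpow): v * s for (a, b, c), v in f.items()}

def iterated(d, f, n):
    for _ in range(n):
        f = d(f)
    return f

def moyal(f, g, order=2):
    """Moyal star product on R^2, truncated at the given order in hbar."""
    total = {}
    for n in range(order + 1):
        term = {}
        for k in range(n + 1):
            left = iterated(dp, iterated(dq, f, n - k), k)   # d_p^k d_q^(n-k) f
            right = iterated(dq, iterated(dp, g, n - k), k)  # d_q^k d_p^(n-k) g
            term = add(term, scale(mul(left, right), (-1) ** k * comb(n, k)))
        total = add(total, scale(term, (1j / 2) ** n / factorial(n), hpow=n))
    return total

q = {(1, 0, 0): 1}
p = {(0, 1, 0): 1}
comm = add(moyal(q, p), scale(moyal(p, q), -1))
print(comm)  # {(0, 0, 1): 1j}, i.e. q*p - p*q = i*hbar
```

This is exactly the $R^{2n}$ (here $n=1$) situation mentioned in the comments, where the resulting algebra should be the Heisenberg algebra.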
</p> <p>The motivation to ask partly comes from the MO discussion here: <a href="http://mathoverflow.net/questions/119849/quantization-of-a-classical-system-e-g-the-case-of-a-billard" rel="nofollow">http://mathoverflow.net/questions/119849/quantization-of-a-classical-system-e-g-the-case-of-a-billard</a></p> http://mathoverflow.net/questions/119849/quantization-of-a-classical-system-e-g-the-case-of-a-billard/119930#119930 Answer by Alexander Chervov for Quantization of a classical system (e.g. the case of a billard) Alexander Chervov 2013-01-26T10:30:34Z 2013-01-26T10:30:34Z <p>Let me add some comments. I think the question has many faces: 1) the general principles of the correspondence between the classical and quantum worlds; 2) a quite concrete question about boundary conditions for the quantization of billiards.</p> <p>About (1) I have written something in <a href="http://mathoverflow.net/questions/106721/quantum-mechanics-basics/106723#106723" rel="nofollow">http://mathoverflow.net/questions/106721/quantum-mechanics-basics/106723#106723</a> I can add more, but I am not sure it is appropriate...</p> <p>About (2), let me add some comments; it is not a full answer, but may still be of some use.</p> <p>So Joel asks: "But I am not sure why the wave function should be defined on R^2 instead of just on B, and even while it should be continuous." </p> <p>Yes, I think from a physical point of view it should be defined on R^2 and should be continuous; let me explain some arguments which come to my mind.</p> <p>How can you confine a particle to a restricted billiard region "B" in practice ? What physical experiment do you have in mind ? </p> <p>The answer is the following - let us create a potential barrier with very high energy: U(x) = U_0 outside "B" and U(x) = 0 inside "B".
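The barrier picture can be illustrated quantitatively in the 1D textbook case of a finite square well, where the ground state is known in closed form up to a matching condition. The sketch below is only an illustration (units with hbar^2/2m = 1, well width 1 - all choices are mine, not from the question): it shows the probability of finding the ground-state particle outside the well shrinking as the barrier height U_0 grows.

```python
from math import cos, sin, sqrt, tan, pi

def outside_probability(U0):
    """Ground state of a 1D finite square well: V = 0 on [-1/2, 1/2],
    V = U0 outside. Returns the probability of being outside the well."""
    # Even ground state: psi = cos(k x) inside, C*exp(-kappa*|x|) outside;
    # matching at x = 1/2 gives k*tan(k/2) = kappa = sqrt(U0 - k^2).
    f = lambda k: k * tan(k / 2) - sqrt(U0 - k * k)
    lo, hi = 1e-9, min(pi, sqrt(U0)) - 1e-9
    for _ in range(200):          # bisection; f is increasing on (0, pi)
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    k = (lo + hi) / 2
    kappa = sqrt(U0 - k * k)
    inside = 0.5 + sin(k) / (2 * k)            # integral of cos^2(k x)
    outside = cos(k / 2) ** 2 / kappa          # integral of the decaying tails
    return outside / (inside + outside)

for U0 in (20.0, 200.0, 2000.0):
    print(U0, outside_probability(U0))
```

The outside probability drops steadily as U_0 grows, which is the quantitative content of taking U_0 = infinity as a mathematical idealization.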
Well, actually I think such a discontinuous potential barrier is not practical, but we can smooth it as much as we want.</p> <p>A classical particle with energy &lt; U_0 cannot go outside the barrier, but a quantum particle can tunnel into the barrier, with an exponentially decaying wave function.</p> <p>Now we just want to consider the limit U_0 -> infinity. That would correspond to confining the quantum particle to the region "B". Again, in practice there are NO infinities, so there is always a small probability for the particle to be outside region B, but as a mathematical abstraction it is okay to take U_0 = inf.</p> <p>So now we come to mathematically well-formulated questions:</p> <p>Consider smooth potentials U_n(x) which approximate U(x), where U(x) = inf on R^2\B and U(x) = 0 inside B. Consider the wave functions \Psi_n(x) which solve the corresponding problem (Laplace + U_n(x)) \Psi_n(x) = \Lambda \Psi_n(x).</p> <p>0) Is it true that the limit \Psi(x) does not depend on the approximating sequence U_n(x) ?</p> <p>1) Is it true that the limit \Psi(x) is continuous ? </p> <p>2) Is it true that the limit \Psi(x) = 0 outside B (including on the boundary) ?</p> <p>I hope the answer is YES to these questions, but I am not sure I know the arguments. </p> <p>It is better to start with these questions on R^1, not R^2 - this is done in any quantum mechanics textbook; I am sorry, I have somewhat forgotten the details.</p> http://mathoverflow.net/questions/119713/financial-mathematics-books/119776#119776 Answer by Alexander Chervov for Financial Mathematics Books Alexander Chervov 2013-01-24T19:02:12Z 2013-01-24T19:02:12Z <p>An expert (a physicist, working partly in finance) recommended the following book to me: </p> <p>Jean-Philippe Bouchaud, Marc Potters (2003).
Theory of Financial Risk and Derivative Pricing: From Statistical Physics to Risk Management <a href="http://www.amazon.com/Theory-Financial-Risk-Derivative-Pricing/dp/0521741866/ref=sr_1_fkmr0_1?s=books&amp;ie=UTF8&amp;qid=1359053375&amp;sr=1-1-fkmr0&amp;keywords=Jean-Philippe+Bouchaud%2C+Marc+Potters+%282003%29.+Theory+of+Financial+Risk+and+Derivative+Pricing.+Cambridge+University+Press++%5BAmazon%5D%5B1%5D" rel="nofollow">Amazon</a></p> <p>It is an <a href="http://en.wikipedia.org/wiki/Econophysics" rel="nofollow">econophysics</a> approach to the analysis of financial markets. It uses quite advanced mathematics, including random matrices, stable distributions and so on. One can also look for the papers by these authors on the <a href="http://arxiv.org/find/all/1/all%3A+AND+Bouchaud+Potters/0/1/0/all/0/1" rel="nofollow">arXiv</a>.</p> <p>Another expert recommended the following site on quantitative finance: <a href="http://www.opentradingsystem.com/quantNotes/main.html" rel="nofollow">http://www.opentradingsystem.com/quantNotes/main.html</a>,</p> <p>and the book "<a href="http://www.amazon.com/High-Frequency-Trading-Practical-Algorithmic-Strategies/dp/1118343506/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1359054076&amp;sr=1-1&amp;keywords=High-Frequency+Trading.+A+Practical+Guide+to+Algorithmic+Strategies+and+Trading+Systems" rel="nofollow">High-Frequency Trading. A Practical Guide to Algorithmic Strategies and Trading Systems</a>" by IRENE ALDRIDGE </p> http://mathoverflow.net/questions/119402/why-all-irreducible-representations-of-compact-groups-are-finite-dimensional-e Why all irreducible representations of compact groups are finite-dimensional ? [EDIT: Subtleties: AC,etc] Alexander Chervov 2013-01-20T15:22:50Z 2013-01-23T04:38:38Z <p>About 20 years ago I read in a textbook that "all irreducible representations of compact groups are finite-dimensional", but the proof of this fact and I have never met each other :) </p> <p>May I ask, dear MO colleagues: is there a (simple?)
argument to prove it ? </p> <p>As far as I have heard, this result can be generalized to the realm of non-commutative geometry, Woronowicz's compact quantum groups (?). </p> <p>So the "bonus" question - what is the appropriate "compactness" condition on an algebra (and/or Hopf algebra) that would guarantee the same property (i.e. all irreps are finite-dimensional) ?</p> <hr> <p>[EDIT] Thanks very much for the excellent answers ! Let me ask about some more details, to finally clarify things.</p> <p>1) What is the maximal possible relaxation of the requirement on the vector space V ? Is it enough to require an arbitrary linear topological space, or do we need to restrict to Hausdorff (?), Banach (?), Hilbert (?), whatever spaces ? (It seems restrictions on the space may come from Schur's lemma; it is not clear to me in what generality it holds.) </p> <p>2) Do we need the axiom of choice here ? (Probably not; we need the existence of Haar measure, but <a href="http://en.wikipedia.org/wiki/Haar_measure" rel="nofollow">Wikipedia writes</a> that "Henri Cartan furnished a proof of existence of Haar measure which avoided AC use.[4]")</p> <p>3) Informally: what is the hardest tool one uses in the proof ? (Maybe the existence of Haar measure ?)</p> <p>[END EDIT].</p> <hr> <p>[EDIT]</p> <p>Let me add a sketch of the argument by Aakumadula, as I understand it. It might help to clarify the new questions.</p> <p>1) Tool: Continuous functions on the group can be mapped to operators on V. (We need a measure here.) (The group algebra acts on V.)</p> <p>2) Fact: A continuous function will be mapped to a COMPACT operator. (In R^n I know how to prove it; in general, no.)</p> <p>3) Observe: Conjugation-invariant functions are mapped to operators which commute with the action of the group.</p> <p>4) Schur's lemma: operators commuting with the group action in an irrep are \Lambda*Id. (What do we need from the space V for this to be true ?
)</p> <p>5) Corollary: If we find an invariant continuous function which is mapped to a NON-zero operator on V, then we are done, because by (2) it is a compact operator and by (4) it is \Lambda*Id with \Lambda \ne 0 - and a compact operator of the form \Lambda*Id with \Lambda \ne 0 forces V to be finite-dimensional.</p> <p>So we need to find an invariant function which is mapped to a non-zero operator on V.</p> <p>6) Take an arbitrary "approximate identity", i.e. a sequence of continuous (non-invariant) functions f_n which converge, as functionals, to the delta function at the identity of the group. (This is a local fact. But how do we prove it ? Do we need the axiom of choice here ?)</p> <p>7) Average the f_n over the group - we get a sequence of INVARIANT continuous functions which again converge to delta(e), since delta(e) is invariant. </p> <p>8) The operators T(f_n) converge to the identity operator, hence for some N they are NON-ZERO. WE ARE DONE by (5) ! Because T(f_N) is compact, equal to \Lambda*Id, and \Lambda is NON-ZERO. </p> <p>[End EDIT].</p> http://mathoverflow.net/questions/119318/soft-voronoi-cells-or-statistical-criterias "Soft" Voronoi cells or statistical criterias Alexander Chervov 2013-01-19T11:16:00Z 2013-01-19T11:16:00Z <p>It is probably a basic statistics question, but... </p> <p><strong>Informally 1</strong>: How do we choose a decision "criterion" that guarantees the probability of an incorrect decision is less than "epsilon", while maximizing the probability of a correct decision ? (The "criterion" and "probabilities" will be explained below.)</p> <p><strong>Informally 2</strong>: Consider some points A_i in R^n. Task - for a given point "x", find the nearest point A_i. This is exactly the same as determining which Voronoi cell the point "x" belongs to. Now imagine that the point "x" is at exactly the same distance from several points A_i; then the answer to the question is not unique. Now let us add some randomness to this setup (see details below), so we should expect that if the point "x" is "near" the edge of a Voronoi cell, our decision to say that the closest point is, say, A_1 will not be reliable.
So I need to "shrink" the Voronoi cells so that I can guarantee the reliability is greater than (1-epsilon).</p> <p>My question will be - what is the shape of these "epsilon-softened" Voronoi cells ? (Of course, I will need to specify the "probabilities" - see below.)</p> <hr> <p><strong>Setup for the formal question</strong> Consider some points A_i in R^n. Consider some R^n-valued random vector "N" (say, Gaussian) (N is the "noise"). Define R = S + N, where S is a random variable which uniformly takes the values A_i (S is the "sent signal", R the "received signal").</p> <p><strong>Question</strong> How do we define subsets D_i in R^n such that:</p> <p>1) The probability that "R" belongs to D_i, under the condition that $S\ne A_i$, is less than epsilon (small probability of incorrect signal detection); </p> <p>2) Among all other choices of subsets D_i satisfying (1), the probability that "R" belongs to D_i, under the condition that $S=A_i$, is the maximal possible.</p> <hr> <p>The D_i are our "soft" Voronoi cells.</p> <p>The informal sense is the following - the D_i are some neighbourhoods of the points A_i, small enough to guarantee (1) (an incorrect decision has small probability), but the biggest among all subsets satisfying (1) (we want to maximize the probability of a correct decision).</p> <hr> <p><strong>Remark</strong> In R^1 this is standard elementary statistics, but the question seems to be non-trivial even in R^2.</p> <p><strong>SubQuestion 1</strong> What are the references ?</p> <p><strong>SubQuestion 2</strong> Is there some simple background that one must understand before starting to think about the question ?</p> <p><strong>SubQuestion 3</strong> Is the question difficult, or is there some simple solution ? (Yes/No)</p> <p><strong>SubQuestion 4</strong> Is it true that the D_i are cut out by hyperplanes (similar to Voronoi cells) ?
(Yes/No)</p> <p><strong>SubQuestion 5</strong> If the problem is difficult in general, is there some approximate solution (algorithm) which satisfies (1) and gives a reasonably good maximization in (2) ? </p> http://mathoverflow.net/questions/116531/the-unreasonable-effectiveness-of-physics-in-mathematics-why-what-how-to-catch The Unreasonable Effectiveness of Physics in Mathematics. Why ? What/how to catch? Alexander Chervov 2012-12-16T15:42:04Z 2013-01-07T19:21:26Z <p>Starting from the 1980s, ideas either coming from physics or due to physicists themselves (e.g. Witten) have been shaping many directions in mathematics. It is tempting to <a href="http://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences" rel="nofollow">paraphrase E. Wigner</a> and speak of "The Unreasonable Effectiveness of Physics in Mathematics". </p> <p>What can be the reasons for it ? </p> <p>Do physicists have some tools/ideas/techniques which allow them to make insights that are not visible to mathematicians? Or is it just because Witten &amp; co. are very ... very smart ?</p> <p>If yes, <strong>what are these tools/ideas ? How does one learn/absorb them (and put them into a mathematical framework)</strong>? </p> <p>What can be the further applications of these ideas ? </p> <hr> <p>Being a mathematician, but having worked in a physicists' environment for many years, I have thought about these questions for quite a while. The recent MO question <a href="http://mathoverflow.net/questions/116251/mathematician-trying-to-learn-string-theory/116402" rel="nofollow">http://mathoverflow.net/questions/116251/mathematician-trying-to-learn-string-theory/116402</a> prompts me to ask them here.</p> <p>I would think that yes, there are such "ideas". But from some outstanding mathematicians I have heard the opposite opinion.</p> <p>My vague feeling is that quantum field theory and string theory are something like analysis/differential geometry on infinite-dimensional manifolds.
But these manifolds are not abstract, say Banach-modeled, manifolds (whose theory is not so rich), but rather spaces of maps from one finite-dimensional manifold to another, which carry certain specific structures not yet fully revealed by mathematicians. E.g. vertex operator algebras arise from maps of a circle to a manifold; if we map not a circle but something higher-dimensional, there should be something more complicated. Another issue is the Feynman integral, which allows physicists to use integration techniques in geometric problems. It is not well-defined mathematically, and it may be that it cannot be defined in the very general form of infinite-dimensional integrals; but again physicists have an intuition for where it can be defined and where it cannot, and a proper mathematical theory should first clarify the setup where it exists, rather than trying to build a general theory which might not exist. These words are probably very vague, so maybe the answers will help me clarify them.</p> <hr> <p>I think everybody knows the influence of physics from the 1980s onward, but for completeness let me mention just a few examples.</p> <p>Donaldson used instanton moduli spaces in his study of 4-folds.</p> <p>Faddeev, Drinfeld et al. created quantum groups.</p> <p>Representation theory of infinite-dimensional algebras has been largely influenced by developments in conformal field theory.</p> <p>Witten's contributions are numerous; his Fields Medal says more than I can.</p> <p>Mirror Symmetry, quantum cohomology, etc...</p> <p>The works of Fields Medalists Kontsevich and Okounkov are largely influenced by physics.</p> <p>And so on and so forth...</p> http://mathoverflow.net/questions/118226/seeing-topological-geom-properties-of-the-space-via-corresponding-c-algebra Seeing topological (geom.) properties of the space via corresponding C^*-algebra Alexander Chervov 2013-01-06T21:42:24Z 2013-01-07T15:35:54Z <p>Compact Hausdorff spaces bijectively correspond to commutative C^*-algebras with identity.
One needs to consider the algebra of continuous functions C(X) to go in one direction, and the spectrum to go in the other. (<a href="http://en.wikipedia.org/wiki/C%2a-algebra#Commutative_C.2A-algebras" rel="nofollow">See e.g. Wikipedia</a>.) The situation is similar to algebraic geometry - affine varieties correspond to commutative algebras... A basic skill in algebraic geometry is to recast algebraic properties as geometric ones and vice versa, e.g. projective modules - vector bundles... (the dictionary is lengthy).</p> <p>I wonder about the analogous correspondence in the C^*-algebra setup. In particular:</p> <p><strong>Question 1:</strong> if the space "X" is a topological manifold (i.e. locally R^n), is there some "nice" way to recognize this via the C^*-algebra of continuous functions ? (... is there a non-commutative version ? ... )</p> <p><strong>Question 2:</strong> if "X" is a smooth manifold, is there a nice way to recognize this, and to define the subalgebra of smooth functions, entirely in terms of the C^*-algebra ? (... is there a non-commutative version ? ... )</p> <p><strong>Question 3</strong> Is it possible to characterize the set of all measures on "X" in terms of C(X) ? (... is there a non-commutative version ? ... ) </p> <p>If you have further comments on how interesting algebraic properties can be recast as topological ones, or vice versa, you are welcome to post them. </p> http://mathoverflow.net/questions/117668/new-grand-projects-in-contemporary-math/118121#118121 Answer by Alexander Chervov for New grand projects in contemporary math Alexander Chervov 2013-01-05T13:10:28Z 2013-01-05T13:10:28Z <p>In information theory (error-correcting codes) the grand achievements of the 1990s are <a href="http://en.wikipedia.org/wiki/Turbo_code" rel="nofollow">turbo codes</a> and <a href="http://en.wikipedia.org/wiki/Low-density_parity-check_code" rel="nofollow">LDPC codes</a>. A recent (2009) discovery which became the hottest topic is <a href="http://en.wikipedia.org/wiki/Polar_code_%28telecommunication%29" rel="nofollow">polar codes</a>.
</p> <p>It is tempting to say that the paradigm shift that came with turbo and LDPC codes, replacing the earlier popular approaches - <a href="http://en.wikipedia.org/wiki/Convolutional_code" rel="nofollow">convolutional codes</a>, <a href="http://en.wikipedia.org/wiki/Reed-Solomon_error_correction" rel="nofollow">Reed-Solomon codes</a>, <a href="http://en.wikipedia.org/wiki/BCH_codes" rel="nofollow">BCH codes</a> et al. - is a shift from algebra to probability, from order to chaos. I mean that the earlier constructions were much dominated by algebraic considerations; e.g. non-recursive convolutional codes are just the ideals in the ring $F_2[x]\oplus ... \oplus F_2[x]$. Turbo and LDPC codes, in contrast, are constructed and decoded with methods much influenced by probabilistic and randomized considerations: roughly speaking, good LDPC codes can be constructed from a sufficiently sparse random matrix. The decoding method used for LDPC codes - <a href="http://en.wikipedia.org/wiki/Belief_propagation" rel="nofollow">belief propagation</a> - naturally belongs to probability and machine learning rather than algebra. </p> <p>Actually a turbo code is almost the same as a convolutional code, modulo one "small" detail - the interleaver. The interleaver is a "randomizer" added to the algebra-flavored convolutional code; it is the crucial thing which makes it all work. That is what concerns the encoder. The decoder of turbo codes "resembles" a turbine - hence the name "turbo" code - and it is crucially based on probabilistic techniques in coding theory. Well, the key technique - the BCJR algorithm - was developed much earlier, so, of course, the division into old and new paradigms is not very precise, but nevertheless there seems to be something behind it.</p> <p>These ideas have found rich practical applications.
If someone is reading this with the help of a smartphone - say thanks to turbo codes: they are working there.</p> <p>The new discovery - polar codes - can probably be characterized as algebra striking back: they seem to be of quite algebraic nature; sorry, I cannot say much for the moment.</p> http://mathoverflow.net/questions/92192/hot-topics-in-error-correcting-coding-related-to-interesting-math Hot-topics in error correcting coding related to interesting math. ? Alexander Chervov 2012-03-25T19:21:31Z 2013-01-04T11:48:52Z <p>What are topics in error-correcting coding which are related to interesting mathematics ? I am primarily interested in today's hot topics, but topics from the old days are also welcome. </p> <p>Let me try to mention what I have heard about.</p> <p>1) A hot topic in error correction is finding LDPC codes with a very low "error floor" for code lengths of tens of thousands of bits; this might be useful for optical transmission. However, it is not clear to me what kind of math plays a role here. (The "error floor" is related to codewords of small Hamming weight. The code might be quite good - meaning the majority of codewords have large Hamming weight, so in most cases the code performs well - but a very small number of codewords having small Hamming weight will cause a small number of errors; this can be seen on the BER/SNR plot as a "floor".)</p> <p>2) There are a certain number of papers applying number theory (lattices in algebraic number fields) to construct good codes. One may see papers by F. Oggier, G. Rekaya-Ben Othman, J.-C. Belfiore, E. Viterbo, e.g. this one: <a href="http://arxiv.org/abs/cs/0604093" rel="nofollow">http://arxiv.org/abs/cs/0604093</a>. I am not aware how "hot" this topic is and how far it is from practical applications...</p> <p>3) Polar codes are a hot topic. What kind of math plays a role here ?</p> <p>4) Probably the most classical example is the Golay code (1948) and the sporadic simple Mathieu groups.
Let me quote Wikipedia: <a href="http://en.wikipedia.org/wiki/Binary_Golay_code" rel="nofollow">http://en.wikipedia.org/wiki/Binary_Golay_code</a> : "The automorphism group of the binary Golay code is the Mathieu group $M_{23}$. The automorphism group of the extended binary Golay code is the Mathieu group $M_{24}$. The other Mathieu groups occur as stabilizers of one or several elements of W." By the way - is this a mere coincidence, or is there something behind it ?</p> http://mathoverflow.net/questions/128961/probability-of-random-0-1-toeplitz-matrix-being-invertible/128979#128979 Comment by Alexander Chervov Alexander Chervov 2013-04-28T14:05:03Z 2013-04-28T14:05:03Z It does not depend on the size n? Hmmm... http://mathoverflow.net/questions/128178/examples-of-applications-of-the-freyd-mitchell-embedding-theorem/128191#128191 Comment by Alexander Chervov Alexander Chervov 2013-04-21T12:12:54Z 2013-04-21T12:12:54Z What is the multiplication on the stable homotopy groups of spheres? (To make them a ring?) http://mathoverflow.net/questions/126932/finite-unipotent-groups-references Comment by Alexander Chervov Alexander Chervov 2013-04-10T09:16:56Z 2013-04-10T09:16:56Z And also <a href="http://mathoverflow.net/questions/127010/classification-for-coadjoint-orbits-of-lower-or-upper-triangular-matrices" rel="nofollow" title="classification for coadjoint orbits of lower or upper triangular matrices">mathoverflow.net/questions/127010/&hellip;</a> classification for coadjoint orbits of lower or upper triangular matrices http://mathoverflow.net/questions/126932/finite-unipotent-groups-references Comment by Alexander Chervov Alexander Chervov 2013-04-09T13:27:42Z 2013-04-09T13:27:42Z And also <a href="http://mathoverflow.net/questions/68207/irreducible-representations-of-the-unitriangular-group" rel="nofollow" title="irreducible representations of the unitriangular group">mathoverflow.net/questions/68207/&hellip;</a> irreducible-representations-of-the-unitriangular-group
http://mathoverflow.net/questions/126828/irreducible-degrees-and-the-order-of-a-finite-group Comment by Alexander Chervov Alexander Chervov 2013-04-09T12:45:59Z 2013-04-09T12:45:59Z Let me mention: <a href="http://mathoverflow.net/questions/108406/why-would-dim-primitive-irrep-divide-size-of-some-conjugacy-class" rel="nofollow" title="why would dim primitive irrep divide size of some conjugacy class">mathoverflow.net/questions/108406/&hellip;</a> http://mathoverflow.net/questions/106521/representation-theory-of-p-groups-in-particular-upper-tringular-matrices-over-f-p Comment by Alexander Chervov Alexander Chervov 2013-04-09T12:39:22Z 2013-04-09T12:39:22Z Related question: <a href="http://mathoverflow.net/questions/126932/finite-unipotent-groups-references" rel="nofollow" title="finite unipotent groups references">mathoverflow.net/questions/126932/&hellip;</a> http://mathoverflow.net/questions/126932/finite-unipotent-groups-references Comment by Alexander Chervov Alexander Chervov 2013-04-09T12:37:56Z 2013-04-09T12:37:56Z Related question <a href="http://mathoverflow.net/questions/106521/representation-theory-of-p-groups-in-particular-upper-tringular-matrices-over-f-p" rel="nofollow" title="representation theory of p groups in particular upper tringular matrices over f p">mathoverflow.net/questions/106521/&hellip;</a> in comments under it I have collected some references, which might be of interest http://mathoverflow.net/questions/126193/darboux-like-theorem-for-non-degenerate-3-forms-in-6-manifolds Comment by Alexander Chervov Alexander Chervov 2013-04-01T20:23:18Z 2013-04-01T20:23:18Z What is nondegenerate? 
http://mathoverflow.net/questions/126074/anick-resolution Comment by Alexander Chervov Alexander Chervov 2013-04-01T10:27:34Z 2013-04-01T10:27:34Z <a href="http://mathoverflow.net/questions/81415/what-is-growth-of-ass-algebra-with-3-generators-and-relation-a1a2a3-a2a3a1-a/81489#81489" rel="nofollow" title="what is growth of ass algebra with 3 generators and relation a1a2a3 a2a3a1 a">mathoverflow.net/questions/81415/&hellip;</a> here is a nice application given by Vladimir Dotsenko http://mathoverflow.net/questions/124772/h-adic-completion-of-u-q-fraksl-2 Comment by Alexander Chervov Alexander Chervov 2013-03-17T11:49:58Z 2013-03-17T11:49:58Z The second cohomology of a semisimple Lie algebra vanishes, so any deformation is trivial, and so the two algebras are isomorphic. http://mathoverflow.net/questions/123796/role-of-applications-in-modern-mathematics Comment by Alexander Chervov Alexander Chervov 2013-03-09T07:37:08Z 2013-03-09T07:37:08Z META discussion <a href="http://meta.mathoverflow.net/discussion/1551/role-of-applications-in-modern-mathematics/" rel="nofollow">meta.mathoverflow.net/discussion/1551/&hellip;</a> http://mathoverflow.net/questions/123796/role-of-applications-in-modern-mathematics/123856#123856 Comment by Alexander Chervov Alexander Chervov 2013-03-07T11:33:48Z 2013-03-07T11:33:48Z Thank you for the answer. My question is about applications of math outside math. I do not see the point of specifying the &quot;application&quot; very precisely - I hope everybody understands the vague meaning, and that is enough. I &quot;assume good will&quot; - if someone thinks something is worth writing in an answer and deserves to be shared with the community - go on...
http://mathoverflow.net/questions/123363/d-modules-as-quantization-of-modules-on-cotangent-bundle Comment by Alexander Chervov Alexander Chervov 2013-03-01T19:46:15Z 2013-03-01T19:46:15Z Look also at the quantization of Lagrangian submanifolds. http://mathoverflow.net/questions/122963/statistical-properties-of-principal-components-and-their-convergence-rates Comment by Alexander Chervov Alexander Chervov 2013-02-26T11:30:35Z 2013-02-26T11:30:35Z Be aware of <a href="http://stats.stackexchange.com/" rel="nofollow">stats.stackexchange.com</a> http://mathoverflow.net/questions/101169/not-especially-famous-long-open-problems-which-higher-mathematics-beginners-can/122677#122677 Comment by Alexander Chervov Alexander Chervov 2013-02-23T09:22:33Z 2013-02-23T09:22:33Z What are the references?
# Finding equivalent resistance for time constant in R-C circuit Given the first-order circuit above. The switch has been closed for a long time and is opened at $t=0$. Find the equation for the voltage $v_C(t)$ across the capacitor after the switch has been opened. I have determined that $v_C(0)=7.619\ V$, but cannot find the time constant. Here is my work for attempting to get $R_{th}$. - hint - what is the impedance of a current source? What does that contribute to the total impedance across the capacitor? –  WhatRoughBeast May 18 at 0:42 Does an ideal current source have an impedance? –  Chris Crutchfield May 18 at 0:43 What is the V/I relationship for an ideal current source? –  Zuofu May 18 at 0:49 I see that an ideal current source is supposed to have an infinite impedance, but I'm still not sure how that helps. –  Chris Crutchfield May 18 at 0:56 What is the parallel resistance of a resistor R and an infinite impedance? And how would this affect the time constant of a resistor and a capacitor (RC)? –  WhatRoughBeast May 18 at 2:04 This is slightly tricky given the presence of the controlled source. While I won't work the problem for you, I will tell you that you can combine the controlled current source and the two rightmost resistors into one equivalent resistance $R_{EQ}$, which is then in parallel with the $50k\Omega$ resistor. Here's a hint: the voltage across the two rightmost resistors (and thus, the current source) is just $(15k\Omega + 25k\Omega)i(t)$. But the current through the top wire connecting the capacitor to the current source / resistors is just $0.75 i(t)$. Thus, the equivalent resistance to the right of the capacitor is... [Schematic created using CircuitLab] - Would that mean that the test current should be $0.75i$? Would that mean that R_th should be 53.333k ohms? –  Chris Crutchfield May 18 at 1:45 @ChrisCrutchfield, the equivalent resistance to the right of the capacitor is indeed 53.3k.
We didn't use a test current - we just used KCL. However, we could have used a test current. Let the test current (to the right of the capacitor) be 1A. Then, the current through the resistors must be 4/3A in order to satisfy KCL. Thus, the voltage, due to a 1A test current, is (4/3)40k = 53.3kV which implies the equivalent resistance is 53.3k ohms –  Alfred Centauri May 18 at 1:56 Since this is a homework question, I will give an extended hint as an "answer". First, consider what the 'R' portion of the RC time constant means. This is the resistance which the capacitor must discharge through to fall to $e^{-1}$ of its original value. We know that the switch is open, so the left side of the circuit is effectively disconnected, we are concerned only with the right side of the circuit (to the right of the switch, including the 50 K resistor). Normally, if we have only independent sources, we can remove them (by short-circuiting ideal V sources and open-circuiting ideal I sources), but this circuit has a dependent current source, so we cannot do that as easily. However, the circuit is linear, so we can still find the equivalent resistor from the perspective of the capacitor, which is what you have to do to solve for the time constant. Recall the procedure to find the Thevenin resistance in a case with dependent sources. This is done by applying a test voltage (for example $V_{test} = 1 V$) at the terminals, and finding the resulting current ($I_{test}$). The equivalent resistance is then $R_{Th}=\dfrac{V_{test}}{I_{test}}$. - Also note that you do not necessarily need to apply a test voltage and calculate the test current. Sometimes (as I think in this circuit) the math is easier to apply a test current and calculate the test voltage. In either case, as long as you have a V/I pair, you can calculate the equivalent resistance. –  Zuofu May 18 at 1:11 So would my R_th be 40k? –  Chris Crutchfield May 18 at 1:14 I think I made a mistake. Is R_th 22.222k ohms? 
–  Chris Crutchfield May 18 at 1:21 That's not what I got, upload your work and we can see. Also, ignore what I said about the test current - that's a good approach in general but I think it's a pain in this problem. –  Zuofu May 18 at 1:26 I edited my question with the work to find R_th. –  Chris Crutchfield May 18 at 1:37
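The test-current bookkeeping from the comments above can be checked with a few lines of arithmetic. This is my sketch of that check, not part of the original thread:

```python
# Test-current check of the equivalent resistance seen from the capacitor's
# terminals (everything to the right of the capacitor, excluding the 50k).
i_test = 1.0                     # A, injected at the capacitor terminals
# KCL at the top node: the wire to the capacitor carries 0.75*i(t),
# so a 1 A test current forces i(t) = 1/0.75 = 4/3 A through the resistors.
i_resistors = i_test / 0.75
r_series = 15e3 + 25e3           # ohms, the two rightmost resistors in series
v_test = i_resistors * r_series  # volts developed by the 1 A test current
r_eq = v_test / i_test           # ≈ 53.3 kOhm, matching the thread

# Per Zuofu's answer, this R_EQ then sits in parallel with the 50k resistor
# to form the resistance in the RC time constant.
r_th = 1.0 / (1.0 / 50e3 + 1.0 / r_eq)
```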
# Student Q&A

Professor Kedlaya provided answers to some of the questions commonly asked by students in the class. The questions, as well as their answers, are listed below. Are the hypotheses of Hartshorne exercise II.2.18(d) (PS 3) all necessary? No, it is enough to assume that phi is surjective; the hypothesis that f is a homeomorphism onto a closed subset will follow as a consequence. Help! I'm stuck on the Stone-Cech compactification (PS 3, problem 2). A: Look up the definition of an ultrafilter, and relate Spec F_2^S to those. Then read a proof of the theorem written in the language of ultrafilters. For f: X -> Y a morphism and y in Y a point, what is the definition of the scheme-theoretic fibre X_y? Let k(y) be the residue field of the local ring O_{Y,y}. Then there is a canonical map Spec k(y) -> Y given by choosing an open affine neighborhood U = Spec(A) of y in Y, then noticing that A maps to O_{Y,y} and hence to k(y). Now let X_y be the fibre product of X with Spec k(y) over Y. What does the notation M(n), for M a graded module over a graded ring, mean? I couldn't find the definition in Hartshorne. It means that the i-th graded piece of M(n) is the (i+n)-th graded piece of M. Does the proof of Theorem II.5.4 in Hartshorne work without the noetherian hypothesis? Yes and no. The proof does work as written. However, the definition of coherent sheaf given in Hartshorne is not the standard one in the nonnoetherian case; he is defining what I would call a finitely generated quasicoherent sheaf. The correct (as per EGA) definition of coherent sheaf will be given in class later. Does the definition of "locally compact" used in this course include the Hausdorff condition? Yes. According to Bourbaki, "locally compact" means Hausdorff and having a neighborhood basis consisting of open sets with compact closure. Let S be a graded ring generated in degree 1. Do the D_+(f) (called D(f) in class) for f in S_1 form a basis of Proj S? No, they only form a cover of Proj S. 
For instance, the intersection of D_+(f) with D_+(g) is D_+(fg), but fg is then in degree 2 rather than 1. Suppose X is a topological space covered by the open sets U_i. Let F and G be sheaves on X, and suppose F -> G is a morphism which induces bijections F(U_i) -> G(U_i). Is F -> G an isomorphism? In general, no. But there is an important special case where this is true: if X is a scheme, each U_i is affine, and F and G are quasicoherent O_X-modules, then this follows from the third fundamental theorem of affine schemes (proof left to the reader). In Hartshorne exercise II.4.8, what exactly is meant by the assertion that "a product of morphisms having property P has property P"? If X,Y,X',Y' are schemes over a common base S, and X -> Y and X' -> Y' are two morphisms (commuting with the maps to S) having P, then X x_S X' -> Y x_S Y' also has P. I'm having trouble dealing with the topological notion of properness (PS 4, exercise 9). The most difficult part is probably (c). Everything can be found in Bourbaki's Topologie Generale, section 1, or the English translation of same, but you might find that difficult to read because it uses the language of filters and ultrafilters. Here is a translation of a key lemma: if f: X -> {point} is proper, then X is quasicompact. Proof sketch: suppose that {U_i} is an open cover of X. Topologize Y = X union {y} with a basis consisting of all subsets of X, plus sets of the form (X - U_{i_1} - ... - U_{i_n}) union {y}. Show that if the closure of {(x,x): x in X} in X x Y has closed image under f x id_Y, then the original cover of X must admit a finite subcover. Does "stable under base change", for a property of morphisms of schemes, imply "local on the base"? It implies only part of that. For a property P of morphisms of schemes, we say P is stable under base change if for any morphism f: Y -> X with P, and any morphism Z -> X at all, the morphism Y x_X Z -> Z has P also. 
If we take Z to be an open subscheme of X, then Y x_X Z is just the inverse image f^{-1}(Z) viewed as an open subscheme of Y. So stability under base change implies that if f has P, then so do the maps f^{-1}(U_i) -> U_i for any open cover U_i of X. We say f is local on the base if this is true and you can go the other way: for any open cover {U_i} of X, if f^{-1}(U_i) -> U_i has P for each i, then f itself has P. Does the category of schemes admit arbitrary (small) colimits? No, it does not even admit finite colimits. It does admit (small) coproducts via the disjoint union, but in general one cannot construct coequalizers, i.e., colimits of diagrams consisting of two morphisms from X to Y. In the category of sets, the coequalizer of f_1, f_2: X -> Y is defined by taking the equivalence relation generated by the pairs (y_1, y_2) in Y x Y with the property that for some x in X, y_1 = f_1(x) and y_2 = f_2(x); the coequalizer is then the quotient by this equivalence relation. Unfortunately, in the category of schemes, it is possible to have equivalence relations (defined in the appropriate categorical fashion) for which the quotient does not exist. Any Moishezon manifold which is not a scheme gives an example; see the appendices to Hartshorne. (More precisely, an equivalence relation on Y is a scheme R carrying two maps R -> Y satisfying the usual three axioms for an equivalence relation, and such that the induced map R -> Y x Y is an immersion. The quotient by R if it exists is the coequalizer of the two maps R -> Y.) By contrast, the category of affine schemes does admit coequalizers (and in fact arbitrary small colimits), because the category of rings has arbitrary small limits (reflected from the category of sets). Is the statement of Hartshorne exercise II.5.13 still true without the hypothesis that S is generated by S_1 as an S_0-algebra? I believe so, but I didn't check all the details. 
I think the assumption is included so that the explanation in terms of the d-uple embedding is valid. How can one approach Hartshorne exercise II.5.14(a)? One approach (possibly not what Hartshorne intended) is to view Spec S as a G_m-bundle over Proj S (i.e., locally a product of Proj S with Spec k[t,t^{-1}] over Spec k). You can then use the fact from class that normality of a ring is a local condition to reduce to checking that if R is a normal domain, then so is R[t,t^{-1}]. Can you explain in more detail what the map Spec (B tensor_A B) -> Spec B is in PS 6, problem 4(a)? Recall that we were given initially a homomorphism f: A -> B of rings. View that as a map of A-algebras, and then tensor over A with B using f. Let's say we're doing this on the right side; you now have a map A tensor_A B -> B tensor_A B computed by a tensor b --> f(a) tensor b. Then identify A tensor_A B with B by identifying 1 tensor b with b. I'm still having trouble with PS 6, problem 4(a) (faithful flat descent along B -> B tensor_A B). In fact, the argument is the same for any faithfully flat morphism B -> C which admits a splitting C -> B in the category of rings; this might simplify your notation. To wit, let M be a C-module equipped with a descent datum. We claim that the underlying B-module is M tensor_C B; to establish this, we must (among other things) exhibit an isomorphism M -> (M tensor_C B) tensor_B C. We get it from the descent isomorphism M tensor_C (C tensor_B C) -> M tensor_C (C tensor_B C), in which on the left side C operates on the first copy of C in C tensor_B C, and on the right side C acts on the second copy of C in C tensor_B C. The two sides of the descent isomorphism are modules over C tensor_B C; using the right action of C on C tensor_B C, we view them both as C-modules. What happens if we now tensor with B? The left side of the isomorphism simply becomes (M tensor_C C) tensor_B (C tensor_B B) = M tensor_B B = M. 
The right side is a bit trickier: we can pull the first C of C tensor_B C all the way to the outside to get C tensor_B (M tensor_C C tensor_C B) = C tensor_B (M tensor_C B). In other words, the descent isomorphism gives us the map M -> (M tensor_C B) tensor_B C that we were looking for. There is more to do, but I'll leave it to you. You must use the cocycle condition to check that the operation M -> (M tensor_C B) tensor_B C is functorial in M. If you get confused, you might want to keep in mind the example k -> k[x] -> k. How do I show that a finite surjective morphism between nonsingular algebraic varieties over an algebraically closed field is flat (Hartshorne exercise III.9.3)? Note that if x is a nonsingular point on the k-variety X, then the map from the local ring O_{X,x} to its completion is faithfully flat, and the latter is isomorphic to a power series ring k[[x_1, ..., x_n]] by the Cohen structure theorem. Using this, one can reduce to checking: if f: R -> R is a finite morphism for R = k[[x_1, ..., x_n]], then f is flat. To prove this, you may want to use the Auslander-Buchsbaum formula (Hartshorne Proposition III.6.11A); you will need to know that if A -> B is a finite injective morphism of local rings, the depth of a B-module M is the same as the depth of M considered as an A-module. In the proof of Hartshorne Proposition II.7.3, it is shown that assuming conditions (1) and (2), then the map phi is injective on closed points. Why does it follow that phi is injective on all points? Suppose the point z in P^n_k has more than one point in its inverse image. Let Z be the closure of z in P^n_k, let y be any point in the inverse image of z, and let Y be the closure of y in X. Since phi is proper, it is a closed map, so phi(Y) is a closed subset of P^n_k contained in Z (by continuity) but containing z (since phi(y) = z). We must thus have phi(Y) = Z. That is, for each point y in phi^{-1}(z), the closure of y in X surjects onto Z. 
So now we hit trouble: if y_1, y_2 are distinct preimages of z with closures Y_1, Y_2, then any closed point of Z has one preimage in Y_1 and one in Y_2, which must then coincide. Since closed points are dense in both Y_1 and Y_2 (closed points are dense in any quasiprojective scheme over a field; I think that was an exercise a while back), this forces Y_1 = Y_2. But then y_1 = y_2 since they are the generic points of Y_1 and Y_2, contradiction. Again in the proof of Hartshorne Proposition II.7.3, while we are trying to prove that phi is projective, Corollary II.5.20 is invoked. But doesn't that statement require phi to be projective already, not just proper? Yes it does! This is a real problem with the argument. Corollary II.5.20 turns out to be true for a proper morphism, but some proof is required. It can be deduced from Chow's lemma (exercise II.4.10) as follows. (I think this is the approach used in EGA III but I haven't checked yet.) Namely, if f: X -> Y is proper, then we can find g: X' -> X a birational regular morphism such that X' -> Y is projective. Here birational means that there is a dense open subset U of X over which g is an isomorphism. If F is a coherent sheaf on X, then g^* F is a coherent sheaf on X', so Corollary II.5.20 implies that f_* g_* g^* F is a coherent sheaf on Y. There is an adjunction map F -> g_* g^* F which is an isomorphism over U, so the kernel H is supported on X-U. If I can show that f_* H is coherent, then I'll be done because f_* F will then be trapped in an exact sequence between the coherent sheaves f_* H and f_* g_* g^* F. I do this by finding a closed subscheme Z supported on X-U such that H is the pushforward of a coherent sheaf on Z. (What this is saying on affines is that if you have a finitely generated module M over a noetherian ring R, and it is supported entirely on V(I) for some ideal I, then there is another ideal I' with the same support as I such that I'M = 0.) 
Now I can replace X -> Y with Z -> Y and argue by induction on the dimension of Z. What is the correct statement of Hartshorne Proposition 8.12? The exact sequence given seems to mix sheaves on X and Z. One way you can say it is to write it using sheaves on X. Letting j denote the closed immersion Z -> X, we have J/J^2 --> Omega_{X/Y} tensor_{O_X} j_* O_Z --> j_* Omega_{Z/Y} --> 0. Or you can note that from Proposition 8.4A, the sheaf on the left is the pushforward of a sheaf F on Z, which coincides with j^* (J/J^2). So we also have j^* (J/J^2) --> j^* Omega_{X/Y} --> Omega_{Z/Y} --> 0. The example of a flat family in Hartshorne Example III.9.8.4 seems a bit mysterious. Where does the ideal I come from? It is obtained as follows. First, working in k[x,y,z,a,a^{-1}], find the defining ideal for the scheme $X_a$ by eliminating t from the parametric equations. Then take generators for this ideal and multiply by a suitable power of a to obtain generators of some ideal in k[x,y,z,a]. That only gives you the desired I if k[x,y,z,a]/I is flat over k[a], or equivalently torsion-free. That will fail precisely if there is a-torsion in the quotient, i.e., if you can form a combination of generators of the ideal which is divisible by a, but the result of dividing by a is no longer in the ideal. Once you "saturate" to eliminate the a-torsion in the quotient, you have a flat family. In Hartshorne exercise II.6.1, once I have an exact sequence 0 -> Z -> Cl(X x P^n) -> Cl(X) -> 0, how do I show that it splits? Use the map from Cl(X) to Cl(X x P^n) taking a divisor D to its inverse image in X x P^n. I got 2g-2 instead of g+1 in Hartshorne exercise IV.1.6. What did I do wrong? Instead of using the canonical divisor, pick a divisor D of degree as small as possible such that l(D) >= 2. On PS 7, problem 8, to exhibit a group of automorphisms of order 168, is it sufficient to exhibit subgroups of orders 2^3, 3, 7? In principle, no, because they might generate a larger group. 
(You would then have to write down relations between the elements to show that this does not happen.) But if you assume characteristic 0, then the Hurwitz bound rules this out. I decided to allow this assumption retroactively. I need a hint on how to write down a nonzero section of the canonical sheaf on a hyperelliptic curve (over a field of characteristic different from 2). Suppose your curve has an affine part of the form Spec k[x,y]/(y^2 - P(x)) with P of odd degree 2g+1 having no repeated roots. Then dx/y = 2 dy/P'(x); these two representations together give a section of omega defined everywhere in the affine part where either y or P'(x) is nonzero. But since P has no repeated roots, this is all of the affine part. Moreover, you can compute the order of vanishing of x, y at the unique point at infinity (by rewriting the defining equation in terms of 1/x and 1/y, which are both regular there) and see that dx/y is holomorphic also at infinity. What is the length of a scheme (PS 7, problem 11)? It is the length of the longest chain of (nonisomorphic) closed immersions ending at your scheme (with starting index 0). For instance, the scheme Spec Z/p^n Z has length n because Spec Z/Z -> Spec Z/p Z -> Spec Z/p^2 Z -> ... -> Spec Z/p^n Z is a chain of length n. It can be shown that, when k is an algebraically closed field (and only then), the length of Spec A for A an artinian k-algebra is equal to dim_k A. So then what do I need to do on PS 7, problem 11? Relate the length of the intersection to the vanishing of some section of the canonical divisor on C itself. (This really has nothing specific to do with the canonical divisor; any divisor giving an embedding works the same way.) I'm confused about PS 8, problem 7(b). Am I supposed to construct a commuting diagram? No, that may not be possible in general. All you have to do is find a quasi-isomorphism D^. -> J^. with J^. a complex of injectives, and a morphism I^. -> J^. 
which on cohomology induces the map that you get from f by using the quasi-isomorphisms C^. -> I^. and D^. -> J^. to identify cohomology. You need not ensure that C^. -> D^. -> J^. and C^. -> I^. -> J^. are the same maps! What does the suggestion about the pushout mean in PS 8, problem 3? It means that instead of directly comparing what happens when you consider monomorphisms u: A -> B and u': A -> B' for which T^i(u) = T^i(u'), you should reduce to the case when these are related by a map B -> B'. You do that by forming the pushout of u and u'. What is the basic idea behind PS 8, problem 7(a)? Say C^. is your original complex, and you have defined a complex of injectives up to I^i and comparison maps up to C^i -> I^i. The arrow I^i -> I^{i+1} must then have kernel equal to the image of ker(C^i -> C^{i+1}) under C^i -> I^i, plus the image of I^{i-1} -> I^i. Let J be the quotient of I^i by that stuff. Now form the pushout of C^i -> C^{i+1} and C^i -> J and stick that into an injective. Isn't the corollary of PS 8, problem 13 already a corollary of the simpler fact that M is flat if and only if Tor_1(R/I,M) = 0 for any ideal I of R? Yes, it is. How do I compute the cohomology in Hartshorne exercise III.2.7(b)? You may want to use the fact that any open cover can be refined to a finite cover by open intervals, and then further refined by replacing each interval with a subinterval whose closure is contained in the original interval. Then use partitions of unity. Why doesn't Hartshorne exercise III.2.1 contradict the acyclicity theorem for affine schemes? Because the sheaf Z_U is not quasicoherent! Should I be using cohomology to solve Hartshorne exercise III.3.2? You can do it either with or without cohomology. In both cases, what you should be trying to prove is: if X is reduced and noetherian, and is the union of two closed subschemes X_1, X_2 which are both affine, then X itself is affine (then argue by induction on the number of components). 
For a cohomological argument: one must show that the higher cohomology of a coherent sheaf F on X is zero. Use the ideal sheaves defining X_1 and X_2 to put F in the middle of a short exact sequence. For a direct argument: put X_1 = Spec R, X_2 = Spec S, X_1 cap X_2 = Spec T; we then have surjective ring homomorphisms R -> T and S -> T. Let U be the kernel of R oplus S -> T; then there is a map X -> Spec U by adjunction, which we want to be an isomorphism. But now we have a claim that we can check locally on Spec U. I gather that Hartshorne exercise II.1.16(b) requires Zorn's Lemma, but I don't quite understand how to apply it here. Recall that the statement is: if F is a flasque sheaf on a topological space X, and 0 -> F -> G -> H -> 0 is an exact sequence of sheaves (of abelian groups, say), then Gamma(X, G) -> Gamma(X, H) is surjective. To prove this, choose a section s in Gamma(X, H). We know that X is covered by open sets U such that s lifts to Gamma(U, G). Consider the collection of such open sets U; it has the property that every chain of such open sets has an upper bound (namely its union, by the sheaf axiom). So Zorn's lemma implies that there is a maximal element U, i.e., an open set U such that s lifts to Gamma(U, G) but not to Gamma(V, G) for any open set V properly containing U. We claim this forces U = X. Suppose the contrary; pick a point x in X not in U. We can then find an open subset V of X containing x such that s lifts to a section t_2 in Gamma(V, G). We already know that s lifts to a section t_1 in Gamma(U, G). After restricting to Gamma(U \cap V, G), the difference t_1 - t_2 maps to zero in Gamma(U \cap V, H); so by exactness, it comes from a section u in Gamma(U \cap V, F). By flasqueness of F, this section extends to a section u in Gamma(X, F). So now the restrictions of t_1 and (t_2 + u) to Gamma(U \cap V, G) agree, which means we can glue them to get a section of Gamma(U \cup V, G). 
This maps to s in Gamma(U \cup V, H) because u maps to 0 as a section of H by exactness of the sequence. This contradicts the choice of U, so we must indeed have U = X. If you know about ordinals, you might prefer to argue using transfinite induction instead. The idea is the same: if U_i is an increasing sequence of open subsets over which s lifts to a section of G, we can always extend it unless it has a last element equal to X. At nonlimit stages we do this as in the previous argument; at limit stages we do it by taking the union. Is there a simple solution of PS 9, exercise 1 not using spectral sequences? Yes; here is a solution suggested by Fucheng Tan. Let 0 -> F -> G -> H -> 0 be a short exact sequence of sheaves on U with F as given and G flasque. Claim: the sequence 0 -> F(U) -> G(U) -> H(U) -> 0 is exact, or equivalently, any section s in H(U) lifts to G(U). Proof: we know that U admits a cover by basic open subsets V_i such that the restriction of s to V_i lifts to a section s_i in G(V_i). For each pair i, j, the difference s_i - s_j maps to the zero section of H(V_i cap V_j), so comes from a unique element f_{ij} of F(V_i cap V_j). Moreover, the f_{ij} form a Cech 1-cocycle for the covering by the V_i, so by our hypothesis on F, they also form a 1-coboundary. That is, there exist g_i in F(V_i) such that g_i - g_j = f_{ij}. Now the sections s_i - g_i in G(V_i) glue to a section in G(U) lifting s. It follows that 0 -> C-cech(U, F) -> C-cech(U, G) -> C-cech(U, H) -> 0 is exact (since we need only work with basic open coverings). If we take the long exact sequence in cohomology, the first connecting homomorphism is zero (because H-cech^0 = H^0 always and we just checked that the H^0 give an exact sequence) and the higher G terms vanish because G is flasque. Hence H also satisfies the input hypothesis (i.e., it is Cech-acyclic on any basic open). 
We can now perform a dimension shifting argument: we get the isomorphism H-cech^1(U, F) -> H^1(U, F) = 0 by comparing long exact sequences, and the higher ones by comparing to H. We now know that F is sheaf-acyclic on each basic open U. By the Leray theorem (and the niceness of the basis), it follows that the Cech cohomology of F for any basic open covering computes sheaf cohomology. Is a coherent sheaf necessarily (locally) finitely generated? Yes, that is part of the definition. How do I get started on PS 11, problem 5(a)? Remember that the property of a sheaf being finitely generated is by definition a local property. So we can check the finite generation of ker(phi) locally. In particular, we need only check it on an open set U for which F itself is generated by finitely many sections over U. We can then add phi to a surjection O_X^m -> F to get another surjection, and proceed from there. If A is a coherent ring, is the structure sheaf on Spec A necessarily coherent? Yes. This follows from the fact that if A is a coherent ring, then so is the localization A_f for any f in A. That can be checked directly from my definition, or it can be deduced by establishing an alternate criterion: a ring is coherent if and only if any finitely generated ideal is finitely presented. I'm confused about the Hilbert function (PS 10, problem 11(b)). That's because I was confused about it too. The classical definition is not the one I gave: if X is a closed subscheme of P^r_k = Proj S for S = k[x_0, ..., x_r], and I is the saturated ideal defining X, then the Hilbert function is n -> dim_k (S/I)_n. For n large, this agrees with the Hilbert polynomial and hence with the number I wrote down, but not for small n; e.g., if X is zero-dimensional, then my Hilbert function is constant but the classical one is not.
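To make the last point concrete, here is a small worked example (mine, not from the course Q&A): two distinct reduced points in the projective line.

```latex
% X = two distinct k-points in P^1_k = Proj S, with S = k[x_0, x_1].
% Their saturated ideal is I = (l_1 l_2) for two non-proportional linear forms.
\[
\dim_k (S/I)_n = \dim_k S_n - \dim_k S_{n-2} =
\begin{cases}
1, & n = 0,\\
2, & n \ge 1,
\end{cases}
\]
% since multiplication by l_1 l_2 embeds S_{n-2} into S_n and dim_k S_n = n+1.
```

The Hilbert polynomial is the constant 2, so the classical Hilbert function agrees with it for $n \ge 1$ but not at $n = 0$, exactly the discrepancy described above.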
## polarity

https://doi.org/10.1351/goldbook.P04710 When applied to solvents, this rather ill-defined term covers their overall solvation capability (solvation power) for solutes (i.e. in chemical equilibria: reactants and products; in reaction rates: reactants and activated complexes; in light absorptions: ions or molecules in the ground and excited state), which in turn depends on the action of all possible, nonspecific and specific, intermolecular interactions between solute ions or molecules and solvent molecules, excluding such interactions leading to definite chemical alterations of the ions or molecules of the solute. Occasionally, the term solvent polarity is restricted to nonspecific solute/solvent interactions only (i.e. to van der Waals forces).
# quaintitative

I write about my quantitative explorations in visualisation, data science, machine and deep learning here, as well as other random musings. For more about me and my other interests, visit playgrd or socials below

## Setting Up a Data Lab Environment - Part 6 - Serve a Flask

Adding Flask to the mix isn't strictly required for this. But I think it's great to be able to

- do the analysis;
- save the data to either the Postgres or Mongo database; and
- serve the analysis from a Flask server

I explained how to set up a Flask server in a previous post. Here, we show how to use Docker to do the same.

```yaml
flaskapp:
  ports:
    - "80:80"
  volumes:
    - ./data:/app/data
  environment:
```

We build an image from the Dockerfile in the folder 'docker/flask' (which I will go through next); connect the container's port 80 to port 80 in the outside world, map the volumes, and then set the variables and commands needed to get Flask up and running.

Next, we create a folder for flask in the docker folder that we had created previously. Within it, we create an app folder, and a Dockerfile. In the Dockerfile, we pull a Docker image, and then install some libraries and copy the Flask app files from the local machine to the container's app folder.

```dockerfile
FROM tiangolo/uwsgi-nginx-flask:python3.6
RUN pip install pymongo
RUN pip install psycopg2
RUN pip install tweepy
COPY ./app /app
```

And that's it. You can just adapt the files in the app folder I provided. It goes slightly beyond what I covered on Flask previously, but I will go into more details on Flask in subsequent posts. The files for this tutorial are available here.
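For orientation, one way the complete service entry could look is sketched below. The `build` path follows the prose above; the environment variable names are invented placeholders, not values from the original post:

```yaml
# Hypothetical completion of the flaskapp compose service (names are
# illustrative placeholders, except the build path described in the post).
flaskapp:
  build: ./docker/flask
  ports:
    - "80:80"
  volumes:
    - ./data:/app/data
  environment:
    - POSTGRES_HOST=postgres   # placeholder
    - MONGO_HOST=mongo         # placeholder
```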
# Recent Developments in Quantum Affine Algebras and Related Topics : Representations of Affine and Quantum Affine Algebras and Their Applications, North Carolina State University, May 21-24, 1998
# Control of Fractional Heat Equation

We use the DyCon Toolbox for solving numerically the following control problem: given any $T>0$, find a control function $g\in L^2((-1,1)\times(0,T))$ such that the corresponding solution to the parabolic problem satisfies $z(x,T)=0$. Here, for all $s\in(0,1)$, $(-d_x^2)^s$ denotes the one-dimensional fractional Laplace operator, defined as the following singular integral

$$(-d_x^2)^s u(x) = c_s\,\mathrm{P.V.}\int_{\mathbb{R}}\frac{u(x)-u(y)}{|x-y|^{1+2s}}\,dy.$$

## Discretization of the problem

As a first step, we need to discretize \eqref{frac_heat}. Hence, let us consider a uniform $N$-point mesh on the interval $(-1,1)$.

```matlab
N = 50;
xi = -1; xf = 1;
xline = linspace(xi,xf,N+2);
xline = xline(2:end-1);
```

Out of that, we can construct the FE approximation of the fractional Laplacian, using the program FEFractionalLaplacian developed by our team, which implements the methodology described in [1].

```matlab
s = 0.8;
A = -FEFractionalLaplacian(s,1,N);
M = massmatrix(xline);
```

Moreover, we build the matrix $B$ defining the action of the control, by using the program "BInterior" (see below).

```matlab
a = -0.3; b = 0.8;
B = BInterior(xline,a,b,'Mass',true);
```

We can then define a final time and an initial datum (note that the `pde` constructor expects the initial condition as a column vector)

```matlab
FinalTime = 0.5;
Y0 = sin(pi*xline)';
```

and construct the system

```matlab
dynamics = pde('A',A,'B',B,'InitialCondition',Y0,'FinalTime',FinalTime,'Nt',100);
dynamics.mesh = xline;
dynamics.MassMatrix = M;
solve(dynamics);

Y = dynamics.StateVector.Symbolic;
U = dynamics.Control.Symbolic;
```

## Construction of the control problem

Secondly, we construct the control problem, which consists in minimizing a functional of classical HUM (Hilbert Uniqueness Method) type, penalizing the control energy together with the distance of the final state from the target. Moreover, we set the final target to $y(T)=0$. 
```matlab
dx = xline(2)-xline(1);
YT = 0.0*xline;
epsilon = dx^4;
Psi = @(T,Y) (1/(2*epsilon))*dx*(YT.' - Y).'*(YT.' - Y);
L = @(t,Y,U) (1/2)*dx*(U.'*U);
iCP1 = Pontryagin(dynamics,Psi,L);
```

## Solution of the minimization problem

As a final step, we use the gradient method we developed to solve the minimization problem and compute the control. In this case, we choose to use the Adaptive Gradient Descent algorithm.

```matlab
tol = 1e-4;
U0 = dynamics.Control.Numeric;
[Uopt, JOpt] = fminunc(@(U) Control2Functional(iCP1,U), U0, options)
```

As we see, the algorithm has stopped since it has reached the maximum number of iterations allowed, and not because it has encountered a minimum of the functional $J$. Actually, we can see in the figure below that the final state is not controlled to zero.

```matlab
plot(iCP1)
```

This is because the HUM functional $J$ we chose to minimize is not suitable for numerical implementation. Indeed, as it has been pointed out in [2], even though $J$ has a unique minimizer, it can be a difficult task to compute it numerically. Hence, it is convenient to deal with a penalized version of our optimization problem, applying the well-known penalized Hilbert Uniqueness Method. This will be the scope of a future post.

```matlab
solve(dynamics)
dynamics.label = 'Free';
iCP1.Dynamics.label = 'Control';
animation([iCP1.Dynamics,dynamics],'YLim',[-1 1],'xx',0.05)

function M = massmatrix(mesh)
    N = length(mesh);
    dx = mesh(2)-mesh(1);
    M = 2/3*eye(N);
    for i = 2:N-1
        M(i,i+1) = 1/6;
        M(i,i-1) = 1/6;
    end
    M(1,2) = 1/6;
    M(N,N-1) = 1/6;
    M = dx*sparse(M);
end
```

## References

[1] U. Biccari and V. Hernández-Santamaría, *Controllability of a one-dimensional fractional heat equation: theoretical and numerical aspects*, IMA J. Math. Control Inf., to appear
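The `massmatrix` helper above translates almost line for line into Python. The following sketch is my transcription, not part of the DyCon code; it builds the same tridiagonal P1 mass matrix using plain lists:

```python
def mass_matrix(mesh):
    """1D P1 finite-element mass matrix on a uniform mesh.

    Mirrors the MATLAB massmatrix helper: 2/3 on the diagonal,
    1/6 on the two off-diagonals, everything scaled by the mesh width dx.
    """
    n = len(mesh)
    dx = mesh[1] - mesh[0]
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        m[i][i] = 2.0 / 3.0
        if i > 0:
            m[i][i - 1] = 1.0 / 6.0
        if i < n - 1:
            m[i][i + 1] = 1.0 / 6.0
    return [[dx * entry for entry in row] for row in m]

# Small demo mesh; each interior row sums to dx since 1/6 + 2/3 + 1/6 = 1,
# a standard sanity check for a P1 mass matrix.
M = mass_matrix([0.0, 0.1, 0.2, 0.3])
```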
# Inverse Functions ### LEARNING OBJECTIVES By the end of this lesson, you will be able to: • Verify inverse functions. • Determine the domain and range of an inverse function, and restrict the domain of a function to make it one-to-one. • Find or evaluate the inverse of a function. • Use the graph of a one-to-one function to graph its inverse function on the same axes. A reversible heat pump is a climate-control system that is an air conditioner and a heater in a single device. Operated in one direction, it pumps heat out of a house to provide cooling. Operating in reverse, it pumps heat into the building from the outside, even in cool weather, to provide heating. As a heater, a heat pump is several times more efficient than conventional electrical resistance heating. If some physical machines can run in two directions, we might ask whether some of the function "machines" we have been studying can also run backwards. Figure 1 provides a visual representation of this question. In this section, we will consider the reverse nature of functions. Figure 1. Can a function "machine" operate in reverse? ## Verifying That Two Functions Are Inverse Functions Suppose a fashion designer traveling to Milan for a fashion show wants to know what the temperature will be. He is not familiar with the Celsius scale. To get an idea of how temperature measurements are related, he asks his assistant, Betty, to convert 75 degrees Fahrenheit to degrees Celsius. She finds the formula $C=\frac{5}{9}\left(F - 32\right)$ and substitutes 75 for $F$ to calculate $\frac{5}{9}\left(75 - 32\right)\approx {24}^{ \circ} {C}$ . Figure 2 Knowing that a comfortable 75 degrees Fahrenheit is about 24 degrees Celsius, he sends his assistant the week’s weather forecast for Milan, and asks her to convert all of the temperatures to degrees Fahrenheit. At first, Betty considers using the formula she has already found to complete the conversions. 
After all, she knows her algebra, and can easily solve the equation for $F$ after substituting a value for $C$. For example, to convert 26 degrees Celsius, she could write

$\begin{cases}26=\frac{5}{9}\left(F - 32\right)\\ 26\cdot \frac{9}{5}=F - 32\\ F=26\cdot \frac{9}{5}+32\approx 79\end{cases}$

After considering this option for a moment, however, she realizes that solving the equation for each of the temperatures will be awfully tedious. She realizes that since evaluation is easier than solving, it would be much more convenient to have a different formula, one that takes the Celsius temperature and outputs the Fahrenheit temperature.

The formula for which Betty is searching corresponds to the idea of an inverse function, which is a function for which the input of the original function becomes the output of the inverse function and the output of the original function becomes the input of the inverse function. Given a function $f\left(x\right)$, we represent its inverse as ${f}^{-1}\left(x\right)$, read as "$f$ inverse of $x$." The raised $-1$ is part of the notation. It is not an exponent; it does not imply a power of $-1$. In other words, ${f}^{-1}\left(x\right)$ does not mean $\frac{1}{f\left(x\right)}$, because $\frac{1}{f\left(x\right)}$ is the reciprocal of $f$ and not the inverse.

The "exponent-like" notation comes from an analogy between function composition and multiplication: just as ${a}^{-1}a=1$ (1 is the identity element for multiplication) for any nonzero number $a$, so ${f}^{-1}\circ f$ equals the identity function, that is,

$\left({f}^{-1}\circ f\right)\left(x\right)={f}^{-1}\left(f\left(x\right)\right)={f}^{-1}\left(y\right)=x$

This holds for all $x$ in the domain of $f$. Informally, this means that inverse functions "undo" each other. However, just as zero does not have a reciprocal, some functions do not have inverses.
Given a function $f\left(x\right)$, we can verify whether some other function $g\left(x\right)$ is the inverse of $f\left(x\right)$ by checking whether either $g\left(f\left(x\right)\right)=x$ or $f\left(g\left(x\right)\right)=x$ is true. We can test whichever equation is more convenient to work with because they are logically equivalent (that is, if one is true, then so is the other).

For example, $y=4x$ and $y=\frac{1}{4}x$ are inverse functions.

$\left({f}^{-1}\circ f\right)\left(x\right)={f}^{-1}\left(4x\right)=\frac{1}{4}\left(4x\right)=x$

and

$\left(f\circ {f}^{-1}\right)\left(x\right)=f\left(\frac{1}{4}x\right)=4\left(\frac{1}{4}x\right)=x$

A few coordinate pairs from the graph of the function $y=4x$ are (−2, −8), (0, 0), and (2, 8). A few coordinate pairs from the graph of the function $y=\frac{1}{4}x$ are (−8, −2), (0, 0), and (8, 2). If we interchange the input and output of each coordinate pair of a function, the interchanged coordinate pairs would appear on the graph of the inverse function.

### A General Note: Inverse Function

For any one-to-one function $f\left(x\right)=y$, a function ${f}^{-1}\left(x\right)$ is an inverse function of $f$ if ${f}^{-1}\left(y\right)=x$. This can also be written as ${f}^{-1}\left(f\left(x\right)\right)=x$ for all $x$ in the domain of $f$. It also follows that $f\left({f}^{-1}\left(x\right)\right)=x$ for all $x$ in the domain of ${f}^{-1}$ if ${f}^{-1}$ is the inverse of $f$.

The notation ${f}^{-1}$ is read "$f$ inverse." Like any other function, we can use any variable name as the input for ${f}^{-1}$, so we will often write ${f}^{-1}\left(x\right)$, which we read as "$f$ inverse of $x$." Keep in mind that ${f}^{-1}\left(x\right)\ne \frac{1}{f\left(x\right)}$ and not all functions have inverses.
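The composition criterion above is easy to check numerically. The following sketch (not part of the lesson) verifies at a few sample points that $f\left(x\right)=4x$ and $g\left(x\right)=\frac{1}{4}x$ undo each other, and that swapped coordinate pairs line up:

```python
def f(x):
    return 4 * x      # f(x) = 4x

def g(x):
    return x / 4      # candidate inverse g(x) = (1/4)x

# Both compositions return the input unchanged, so g is the inverse of f.
for x in [-2.0, 0.0, 3.5]:
    assert g(f(x)) == x
    assert f(g(x)) == x

# Interchanged coordinate pairs from f appear on the graph of g:
assert f(2) == 8 and g(8) == 2
```

A spot check at a handful of points is of course not a proof, but it catches most mistakes when testing a candidate inverse.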
### Example 1: Identifying an Inverse Function for a Given Input-Output Pair

If for a particular one-to-one function $f\left(2\right)=4$ and $f\left(5\right)=12$, what are the corresponding input and output values for the inverse function?

### Solution

The inverse function reverses the input and output quantities, so if

$\begin{cases}f\left(2\right)=4,\text{ then }{f}^{-1}\left(4\right)=2;\\ f\left(5\right)=12,\text{ then }{f}^{-1}\left(12\right)=5.\end{cases}$

Alternatively, if we want to name the inverse function $g$, then $g\left(4\right)=2$ and $g\left(12\right)=5$.

### Analysis of the Solution

Notice that if we show the coordinate pairs in a table form, the input and output are clearly reversed.

| $\left(x,f\left(x\right)\right)$ | $\left(x,g\left(x\right)\right)$ |
| --- | --- |
| $\left(2,4\right)$ | $\left(4,2\right)$ |
| $\left(5,12\right)$ | $\left(12,5\right)$ |

### Try It 1

Given that ${h}^{-1}\left(6\right)=2$, what are the corresponding input and output values of the original function $h?$

Solution

### How To: Given two functions $f\left(x\right)$ and $g\left(x\right)$, test whether the functions are inverses of each other.

1. Determine whether $f\left(g\left(x\right)\right)=x$ or $g\left(f\left(x\right)\right)=x$.
2. If either statement is true, then both are true, and $g={f}^{-1}$ and $f={g}^{-1}$. If either statement is false, then both are false, and $g\ne {f}^{-1}$ and $f\ne {g}^{-1}$.

### Example 2: Testing Inverse Relationships Algebraically

If $f\left(x\right)=\frac{1}{x+2}$ and $g\left(x\right)=\frac{1}{x}-2$, is $g={f}^{-1}?$

### Solution

$\begin{cases}g\left(f\left(x\right)\right)=\frac{1}{\left(\frac{1}{x+2}\right)}-2\\ =x+2-2\\ =x\end{cases}$

so

$g={f}^{-1}\text{ and }f={g}^{-1}$

This is enough to answer yes to the question, but we can also verify the other formula.
$\begin{cases}f\left(g\left(x\right)\right)=\frac{1}{\frac{1}{x}-2+2}\\ =\frac{1}{\frac{1}{x}}\\ =x\end{cases}$

### Analysis of the Solution

Notice the inverse operations are in reverse order of the operations from the original function.

### Try It 2

If $f\left(x\right)={x}^{3}-4$ and $g\left(x\right)=\sqrt[3]{x+4}$, is $g={f}^{-1}?$

Solution

### Example 3: Determining Inverse Relationships for Power Functions

If $f\left(x\right)={x}^{3}$ (the cube function) and $g\left(x\right)=\frac{1}{3}x$, is $g={f}^{-1}?$

### Solution

$f\left(g\left(x\right)\right)=\frac{{x}^{3}}{27}\ne x$

No, the functions are not inverses.

### Analysis of the Solution

The correct inverse to the cube is, of course, the cube root $\sqrt[3]{x}={x}^{\frac{1}{3}}$; that is, the one-third is an exponent, not a multiplier.

### Try It 3

If $f\left(x\right)={\left(x - 1\right)}^{3}$ and $g\left(x\right)=\sqrt[3]{x}+1$, is $g={f}^{-1}?$

Solution

## Finding Domain and Range of Inverse Functions

The outputs of the function $f$ are the inputs to ${f}^{-1}$, so the range of $f$ is also the domain of ${f}^{-1}$. Likewise, because the inputs to $f$ are the outputs of ${f}^{-1}$, the domain of $f$ is the range of ${f}^{-1}$. We can visualize the situation.

Figure 3. Domain and range of a function and its inverse

When a function has no inverse function, it is possible to create a new function where that new function on a limited domain does have an inverse function. For example, the inverse of $f\left(x\right)=\sqrt{x}$ is ${f}^{-1}\left(x\right)={x}^{2}$, because a square "undoes" a square root; but the square is only the inverse of the square root on the domain $\left[0,\infty \right)$, since that is the range of $f\left(x\right)=\sqrt{x}$.

We can look at this problem from the other side, starting with the square (toolkit quadratic) function $f\left(x\right)={x}^{2}$.
If we want to construct an inverse to this function, we run into a problem, because for every given output of the quadratic function, there are two corresponding inputs (except when the input is 0). For example, the output 9 from the quadratic function corresponds to the inputs 3 and –3. But an output from a function is an input to its inverse; if this inverse input corresponds to more than one inverse output (input of the original function), then the "inverse" is not a function at all! To put it differently, the quadratic function is not a one-to-one function; it fails the horizontal line test, so it does not have an inverse function. In order for a function to have an inverse, it must be a one-to-one function.

In many cases, if a function is not one-to-one, we can still restrict the function to a part of its domain on which it is one-to-one. For example, we can make a restricted version of the square function $f\left(x\right)={x}^{2}$ with its domain limited to $\left[0,\infty \right)$, which is a one-to-one function (it passes the horizontal line test) and which has an inverse (the square-root function). If $f\left(x\right)={\left(x - 1\right)}^{2}$ on $\left[1,\infty \right)$, then the inverse function is ${f}^{-1}\left(x\right)=\sqrt{x}+1$.

• The domain of $f$ = range of ${f}^{-1}$ = $\left[1,\infty \right)$.
• The domain of ${f}^{-1}$ = range of $f$ = $\left[0,\infty \right)$.

Q & A

Is it possible for a function to have more than one inverse?

No. If two supposedly different functions, say, $g$ and $h$, both meet the definition of being inverses of another function $f$, then you can prove that $g=h$. We have just seen that some functions only have inverses if we restrict the domain of the original function. In these cases, there may be more than one way to restrict the domain, leading to different inverses. However, on any one domain, the original function still has only one unique inverse.
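The restricted-domain example above can be checked numerically. This sketch (not part of the lesson) verifies that $f\left(x\right)={\left(x - 1\right)}^{2}$ on $\left[1,\infty \right)$ and ${f}^{-1}\left(x\right)=\sqrt{x}+1$ undo each other on the appropriate sets:

```python
import math

def f(x):
    # restricted to [1, inf) so that f is one-to-one
    assert x >= 1, "f is restricted to [1, inf)"
    return (x - 1) ** 2

def f_inv(x):
    # inputs to the inverse come from the range of f, which is [0, inf)
    assert x >= 0, "f_inv is defined on [0, inf)"
    return math.sqrt(x) + 1

for x in [1.0, 2.5, 10.0]:       # points in the restricted domain of f
    assert abs(f_inv(f(x)) - x) < 1e-12

for y in [0.0, 4.0, 81.0]:       # points in the range of f
    assert abs(f(f_inv(y)) - y) < 1e-12
```

Without the domain restriction the check would fail: for instance, $f\left(-1\right)=4$ but ${f}^{-1}\left(4\right)=3$, not $-1$.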
### A General Note: Domain and Range of Inverse Functions

The range of a function $f\left(x\right)$ is the domain of the inverse function ${f}^{-1}\left(x\right)$. The domain of $f\left(x\right)$ is the range of ${f}^{-1}\left(x\right)$.

### How To: Given a function, find the domain and range of its inverse.

1. If the function is one-to-one, write the range of the original function as the domain of the inverse, and write the domain of the original function as the range of the inverse.
2. If the domain of the original function needs to be restricted to make it one-to-one, then this restricted domain becomes the range of the inverse function.

### Example 4: Finding the Inverses of Toolkit Functions

Identify which of the toolkit functions besides the quadratic function are not one-to-one, and find a restricted domain on which each function is one-to-one, if any. The toolkit functions are reviewed below. We restrict the domain in such a fashion that the function assumes all y-values exactly once.

| Constant | Identity | Quadratic | Cubic | Reciprocal |
| --- | --- | --- | --- | --- |
| $f\left(x\right)=c$ | $f\left(x\right)=x$ | $f\left(x\right)={x}^{2}$ | $f\left(x\right)={x}^{3}$ | $f\left(x\right)=\frac{1}{x}$ |

| Reciprocal squared | Cube root | Square root | Absolute value |
| --- | --- | --- | --- |
| $f\left(x\right)=\frac{1}{{x}^{2}}$ | $f\left(x\right)=\sqrt[3]{x}$ | $f\left(x\right)=\sqrt{x}$ | $f\left(x\right)=|x|$ |

### Solution

The constant function is not one-to-one, and there is no domain (except a single point) on which it could be one-to-one, so the constant function has no meaningful inverse. The absolute value function can be restricted to the domain $\left[0,\infty \right)$, where it is equal to the identity function. The reciprocal-squared function can be restricted to the domain $\left(0,\infty \right)$.

### Analysis of the Solution

We can see that these functions (if unrestricted) are not one-to-one by looking at their graphs. They both would fail the horizontal line test.
However, if a function is restricted to a certain domain so that it passes the horizontal line test, then in that restricted domain, it can have an inverse.

Figure 4. (a) Absolute value (b) Reciprocal squared

### Try It 4

The domain of function $f$ is $\left(1,\infty \right)$ and the range of function $f$ is $\left(\mathrm{-\infty },-2\right)$. Find the domain and range of the inverse function.

Solution

## Finding and Evaluating Inverse Functions

Once we have a one-to-one function, we can evaluate its inverse at specific inputs, and in many cases we can construct a complete representation of the inverse function.

## Inverting Tabular Functions

Suppose we want to find the inverse of a function represented in table form. Remember that the domain of a function is the range of the inverse and the range of the function is the domain of the inverse. So we need to interchange the domain and range. Each row (or column) of inputs becomes the row (or column) of outputs for the inverse function. Similarly, each row (or column) of outputs becomes the row (or column) of inputs for the inverse function.

### Example 5: Interpreting the Inverse of a Tabular Function

A function $f\left(t\right)$ is given below, showing distance in miles that a car has traveled in $t$ minutes. Find and interpret ${f}^{-1}\left(70\right)$.

| $t$ (minutes) | 30 | 50 | 70 | 90 |
| --- | --- | --- | --- | --- |
| $f\left(t\right)$ (miles) | 20 | 40 | 60 | 70 |

### Solution

The inverse function takes an output of $f$ and returns an input for $f$. So in the expression ${f}^{-1}\left(70\right)$, 70 is an output value of the original function, representing 70 miles. The inverse will return the corresponding input of the original function $f$, 90 minutes, so ${f}^{-1}\left(70\right)=90$. The interpretation of this is that, to drive 70 miles, it took 90 minutes.

Alternatively, recall that the definition of the inverse was that if $f\left(a\right)=b$, then ${f}^{-1}\left(b\right)=a$.
By this definition, if we are given ${f}^{-1}\left(70\right)=a$, then we are looking for a value $a$ so that $f\left(a\right)=70$. In this case, we are looking for a $t$ so that $f\left(t\right)=70$, which is when $t=90$.

### Try It 5

Using the table below, find and interpret (a) $f\left(60\right)$, and (b) ${f}^{-1}\left(60\right)$.

| $t$ (minutes) | 30 | 50 | 60 | 70 | 90 |
| --- | --- | --- | --- | --- | --- |
| $f\left(t\right)$ (miles) | 20 | 40 | 50 | 60 | 70 |

Solution

## Evaluating the Inverse of a Function, Given a Graph of the Original Function

We saw in Functions and Function Notation that the domain of a function can be read by observing the horizontal extent of its graph. We find the domain of the inverse function by observing the vertical extent of the graph of the original function, because this corresponds to the horizontal extent of the inverse function. Similarly, we find the range of the inverse function by observing the horizontal extent of the graph of the original function, as this is the vertical extent of the inverse function.

If we want to evaluate an inverse function, we find its input within its domain, which is all or part of the vertical axis of the original function's graph.

### How To: Given the graph of a function, evaluate its inverse at specific points.

1. Find the desired input on the y-axis of the given graph.
2. Read the inverse function's output from the x-axis of the given graph.

### Example 6: Evaluating a Function and Its Inverse from a Graph at Specific Points

A function $g\left(x\right)$ is given in Figure 5. Find $g\left(3\right)$ and ${g}^{-1}\left(3\right)$.

Figure 5

### Solution

To evaluate $g\left(3\right)$, we find 3 on the x-axis and find the corresponding output value on the y-axis. The point $\left(3,1\right)$ tells us that $g\left(3\right)=1$.

To evaluate ${g}^{-1}\left(3\right)$, recall that by definition ${g}^{-1}\left(3\right)$ means the value of x for which $g\left(x\right)=3$.
By looking for the output value 3 on the vertical axis, we find the point $\left(5,3\right)$ on the graph, which means $g\left(5\right)=3$, so by definition, ${g}^{-1}\left(3\right)=5$.

Figure 6

### Try It 6

Using the graph in Example 6, (a) find ${g}^{-1}\left(1\right)$, and (b) estimate ${g}^{-1}\left(4\right)$.

Solution

## Finding Inverses of Functions Represented by Formulas

Sometimes we will need to know an inverse function for all elements of its domain, not just a few. If the original function is given as a formula (for example, $y$ as a function of $x$), we can often find the inverse function by solving to obtain $x$ as a function of $y$.

### How To: Given a function represented by a formula, find the inverse.

1. Make sure $f$ is a one-to-one function.
2. Solve for $x$.
3. Interchange $x$ and $y$.

### Example 7: Inverting the Fahrenheit-to-Celsius Function

Find a formula for the inverse function that gives Fahrenheit temperature as a function of Celsius temperature.

$C=\frac{5}{9}\left(F - 32\right)$

### Solution

$\begin{cases}C=\frac{5}{9}\left(F - 32\right)\\ C\cdot \frac{9}{5}=F - 32\\ F=\frac{9}{5}C+32\end{cases}$

By solving in general, we have uncovered the inverse function. If $C=h\left(F\right)=\frac{5}{9}\left(F - 32\right)$, then $F={h}^{-1}\left(C\right)=\frac{9}{5}C+32$.

In this case, we introduced a function $h$ to represent the conversion because the input and output variables are descriptive, and writing ${C}^{-1}$ could get confusing.

### Try It 7

Solve for $x$ in terms of $y$ given $y=\frac{1}{3}\left(x - 5\right)$

Solution

### Example 8: Solving to Find an Inverse Function

Find the inverse of the function $f\left(x\right)=\frac{2}{x - 3}+4$.
### Solution

$\begin{cases}y=\frac{2}{x - 3}+4 & \text{Set up an equation}.\\ y - 4=\frac{2}{x - 3} & \text{Subtract 4 from both sides}.\\ x - 3=\frac{2}{y - 4} & \text{Multiply both sides by }x - 3\text{ and divide by }y - 4.\\ x=\frac{2}{y - 4}+3 & \text{Add 3 to both sides}.\end{cases}$

So ${f}^{-1}\left(y\right)=\frac{2}{y - 4}+3$ or ${f}^{-1}\left(x\right)=\frac{2}{x - 4}+3$.

### Analysis of the Solution

The domain and range of $f$ exclude the values 3 and 4, respectively. $f$ and ${f}^{-1}$ are equal at two points but are not the same function, as we can see by creating the table below. Reading the table left to right gives inputs $x$ paired with outputs $f\left(x\right)$; reading it in reverse gives ${f}^{-1}$.

| $x$ | 1 | 2 | 5 | ${f}^{-1}\left(y\right)$ |
| --- | --- | --- | --- | --- |
| $f\left(x\right)$ | 3 | 2 | 5 | $y$ |

### Example 9: Solving to Find an Inverse with Radicals

Find the inverse of the function $f\left(x\right)=2+\sqrt{x - 4}$.

### Solution

$\begin{cases}y=2+\sqrt{x - 4}\\ {\left(y - 2\right)}^{2}=x - 4\\ x={\left(y - 2\right)}^{2}+4\end{cases}$

So ${f}^{-1}\left(x\right)={\left(x - 2\right)}^{2}+4$.

The domain of $f$ is $\left[4,\infty \right)$. Notice that the range of $f$ is $\left[2,\infty \right)$, so this means that the domain of the inverse function ${f}^{-1}$ is $\left[2,\infty \right)$.

### Analysis of the Solution

The formula we found for ${f}^{-1}\left(x\right)$ looks like it would be valid for all real $x$. However, ${f}^{-1}$ itself must have an inverse (namely, $f$), so we have to restrict the domain of ${f}^{-1}$ to $\left[2,\infty \right)$ in order to make ${f}^{-1}$ a one-to-one function. This domain of ${f}^{-1}$ is exactly the range of $f$.

### Try It 8

What is the inverse of the function $f\left(x\right)=2-\sqrt{x}?$ State the domains of both the function and the inverse function.

Solution

Now that we can find the inverse of a function, we will explore the graphs of functions and their inverses.
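The inverses found in Examples 8 and 9 can be checked numerically. This sketch (not part of the lesson) composes each function with its inverse at a few sample points; note that the domain restrictions discussed above matter for Example 9:

```python
import math

# Example 8: f(x) = 2/(x - 3) + 4, with inverse 2/(x - 4) + 3
f8     = lambda x: 2 / (x - 3) + 4    # defined for x != 3
f8_inv = lambda x: 2 / (x - 4) + 3    # defined for x != 4

for x in [1.0, 2.0, 5.0, -7.0]:
    assert abs(f8_inv(f8(x)) - x) < 1e-12

# Example 9: f(x) = 2 + sqrt(x - 4) on [4, inf),
# with inverse (x - 2)^2 + 4 on [2, inf)
f9     = lambda x: 2 + math.sqrt(x - 4)
f9_inv = lambda x: (x - 2) ** 2 + 4

for x in [4.0, 5.0, 13.0]:            # points in the domain of f9
    assert abs(f9_inv(f9(x)) - x) < 1e-12
```

The table in Example 8 is also easy to confirm this way: for instance, `f8(1.0)` returns 3 and `f8_inv(3.0)` returns 1.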
Let us return to the quadratic function $f\left(x\right)={x}^{2}$ restricted to the domain $\left[0,\infty \right)$, on which this function is one-to-one, and graph it as in Figure 7.

Figure 7. Quadratic function with domain restricted to [0, ∞).

Restricting the domain to $\left[0,\infty \right)$ makes the function one-to-one (it will obviously pass the horizontal line test), so it has an inverse on this restricted domain. We already know that the inverse of the toolkit quadratic function is the square root function, that is, ${f}^{-1}\left(x\right)=\sqrt{x}$. What happens if we graph both $f$ and ${f}^{-1}$ on the same set of axes, using the $x$-axis for the input to both $f$ and ${f}^{-1}?$

We notice a distinct relationship: the graph of ${f}^{-1}\left(x\right)$ is the graph of $f\left(x\right)$ reflected about the line $y=x$, which we will call the identity line, shown in Figure 8.

Figure 8. Square and square-root functions on the non-negative domain

This relationship will be observed for all one-to-one functions, because it is a result of the function and its inverse swapping inputs and outputs. This is equivalent to interchanging the roles of the vertical and horizontal axes.

### Example 10: Finding the Inverse of a Function Using Reflection about the Identity Line

Given the graph of $f\left(x\right)$, sketch a graph of ${f}^{-1}\left(x\right)$.

Figure 9

### Solution

This is a one-to-one function, so we will be able to sketch an inverse. Note that the graph shown has an apparent domain of $\left(0,\infty \right)$ and range of $\left(-\infty ,\infty \right)$, so the inverse will have a domain of $\left(-\infty ,\infty \right)$ and range of $\left(0,\infty \right)$.

If we reflect this graph over the line $y=x$, the point $\left(1,0\right)$ reflects to $\left(0,1\right)$ and the point $\left(4,2\right)$ reflects to $\left(2,4\right)$. Sketching the inverse on the same axes as the original graph gives us the result in Figure 10.

Figure 10.
The function and its inverse, showing reflection about the identity line

### Try It 9

Draw graphs of the functions $f$ and ${f}^{-1}$.

Solution

Q & A

Is there any function that is equal to its own inverse?

Yes. If $f={f}^{-1}$, then $f\left(f\left(x\right)\right)=x$, and we can think of several functions that have this property. The identity function does, and so does the reciprocal function, because

$\frac{1}{\frac{1}{x}}=x$

Any function $f\left(x\right)=c-x$, where $c$ is a constant, is also equal to its own inverse.

## Key Concepts

• If $g\left(x\right)$ is the inverse of $f\left(x\right)$, then $g\left(f\left(x\right)\right)=f\left(g\left(x\right)\right)=x$.
• Each of the toolkit functions, except the constant function, has an inverse, in some cases only after the domain is restricted.
• For a function to have an inverse, it must be one-to-one (pass the horizontal line test).
• A function that is not one-to-one over its entire domain may be one-to-one on part of its domain.
• For a tabular function, exchange the input and output rows to obtain the inverse.
• The inverse of a function can be determined at specific points on its graph.
• To find the inverse of a formula, solve the equation $y=f\left(x\right)$ for $x$ as a function of $y$. Then exchange the labels $x$ and $y$.
• The graph of an inverse function is the reflection of the graph of the original function across the line $y=x$.

## Glossary

inverse function: for any one-to-one function $f\left(x\right)$, the inverse is a function ${f}^{-1}\left(x\right)$ such that ${f}^{-1}\left(f\left(x\right)\right)=x$ for all $x$ in the domain of $f$; this also implies that $f\left({f}^{-1}\left(x\right)\right)=x$ for all $x$ in the domain of ${f}^{-1}$

## Problem Set

1. Why is the horizontal line test an effective way to determine whether a function is one-to-one?
2. Why do we restrict the domain of the function $f\left(x\right)={x}^{2}$ to find the function's inverse?
3. Can a function be its own inverse? Explain.
4.
Are one-to-one functions either always increasing or always decreasing? Why or why not?
5. How do you find the inverse of a function algebraically?
6. Show that the function $f\left(x\right)=a-x$ is its own inverse for all real numbers $a$.

For the following exercises, find ${f}^{-1}\left(x\right)$ for each function.

7. $f\left(x\right)=x+3$
8. $f\left(x\right)=x+5$
9. $f\left(x\right)=2-x$
10. $f\left(x\right)=3-x$
11. $f\left(x\right)=\frac{x}{x+2}$
12. $f\left(x\right)=\frac{2x+3}{5x+4}$

For the following exercises, find a domain on which each function $f$ is one-to-one and non-decreasing. Write the domain in interval notation. Then find the inverse of $f$ restricted to that domain.

13. $f\left(x\right)={\left(x+7\right)}^{2}$
14. $f\left(x\right)={\left(x - 6\right)}^{2}$
15. $f\left(x\right)={x}^{2}-5$
16. Given $f\left(x\right)=\frac{x}{2+x}$ and $g\left(x\right)=\frac{2x}{1-x}$: a. Find $f\left(g\left(x\right)\right)$ and $g\left(f\left(x\right)\right)$. b. What does your answer tell us about the relationship between $f\left(x\right)$ and $g\left(x\right)?$

For the following exercises, use function composition to verify that $f\left(x\right)$ and $g\left(x\right)$ are inverse functions.

17. $f\left(x\right)=\sqrt[3]{x - 1}$ and $g\left(x\right)={x}^{3}+1$
18. $f\left(x\right)=-3x+5$ and $g\left(x\right)=\frac{x - 5}{-3}$

For the following exercises, use a graphing utility to determine whether each function is one-to-one.

19. $f\left(x\right)=\sqrt{x}$
20. $f\left(x\right)=\sqrt[3]{3x+1}$
21. $f\left(x\right)=-5x+1$
22. $f\left(x\right)={x}^{3}-27$

For the following exercises, determine whether the graph represents a one-to-one function.

23.
24.

For the following exercises, use the graph of $f$ shown below.

25. Find $f\left(0\right)$.
26. Solve $f\left(x\right)=0$.
27. Find ${f}^{-1}\left(0\right)$.
28. Solve ${f}^{-1}\left(x\right)=0$.

For the following exercises, use the graph of the one-to-one function shown below.

29. Sketch the graph of ${f}^{-1}$.
30. Find $f\left(6\right)\text{ and }{f}^{-1}\left(2\right)$.
31.
If the complete graph of $f$ is shown, find the domain of $f$.
32. If the complete graph of $f$ is shown, find the range of $f$.

For the following exercises, evaluate or solve, assuming that the function $f$ is one-to-one.

33. If $f\left(6\right)=7$, find ${f}^{-1}\left(7\right)$.
34. If $f\left(3\right)=2$, find ${f}^{-1}\left(2\right)$.
35. If ${f}^{-1}\left(-4\right)=-8$, find $f\left(-8\right)$.
36. If ${f}^{-1}\left(-2\right)=-1$, find $f\left(-1\right)$.

For the following exercises, use the values listed in the table below to evaluate or solve.

| $x$ | $f\left(x\right)$ |
| --- | --- |
| 0 | 8 |
| 1 | 0 |
| 2 | 7 |
| 3 | 4 |
| 4 | 2 |
| 5 | 6 |
| 6 | 5 |
| 7 | 3 |
| 8 | 9 |
| 9 | 1 |

37. Find $f\left(1\right)$.
38. Solve $f\left(x\right)=3$.
39. Find ${f}^{-1}\left(0\right)$.
40. Solve ${f}^{-1}\left(x\right)=7$.
41. Use the tabular representation of $f$ to create a table for ${f}^{-1}\left(x\right)$.

| $x$ | 3 | 6 | 9 | 13 | 14 |
| --- | --- | --- | --- | --- | --- |
| $f\left(x\right)$ | 1 | 4 | 7 | 12 | 16 |

For the following exercises, find the inverse function. Then, graph the function and its inverse.

42. $f\left(x\right)=\frac{3}{x - 2}$
43. $f\left(x\right)={x}^{3}-1$
44. Find the inverse function of $f\left(x\right)=\frac{1}{x - 1}$. Use a graphing utility to find its domain and range. Write the domain and range in interval notation.
45. To convert from $x$ degrees Celsius to $y$ degrees Fahrenheit, we use the formula $f\left(x\right)=\frac{9}{5}x+32$. Find the inverse function, if it exists, and explain its meaning.
46. The circumference $C$ of a circle is a function of its radius given by $C\left(r\right)=2\pi r$. Express the radius of a circle as a function of its circumference. Call this function $r\left(C\right)$. Find $r\left(36\pi \right)$ and interpret its meaning.
47. A car travels at a constant speed of 50 miles per hour. The distance the car travels in miles is a function of time, $t$, in hours given by $d\left(t\right)=50t$. Find the inverse function by expressing the time of travel in terms of the distance traveled. Call this function $t\left(d\right)$.
Find $t\left(180\right)$ and interpret its meaning.
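As a worked sketch for the last two exercises (46 and 47), the inverse formulas follow by solving each equation for the input variable, and the specific values are easy to confirm:

```python
import math

# Exercise 46: C(r) = 2*pi*r, so solving for r gives r(C) = C / (2*pi)
def r(C):
    return C / (2 * math.pi)

assert abs(r(36 * math.pi) - 18) < 1e-12   # circumference 36*pi -> radius 18

# Exercise 47: d(t) = 50*t miles, so solving for t gives t(d) = d / 50 hours
def t(d):
    return d / 50

assert abs(t(180) - 3.6) < 1e-12           # driving 180 miles takes 3.6 hours
```

The interpretation of $t\left(180\right)=3.6$ is that traveling 180 miles at 50 miles per hour takes 3.6 hours.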
# Quadratic Lie Algebras

Mathematics > Quantum Algebra

Abstract: In this paper, the notion of universal enveloping algebra introduced in A. Ardizzoni, *A First Sight Towards Primitively Generated Connected Braided Bialgebras*, submitted, arXiv:0805.3391v3, is specialized to the case of braided vector spaces whose Nichols algebra is quadratic as an algebra. In this setting, a classification of universal enveloping algebras for braided vector spaces of dimension not greater than 2 is handled. As an application, we investigate the structure of primitively generated connected braided bialgebras whose braided vector space of primitive elements forms a Nichols algebra which is a quadratic algebra.

Authors: Alessandro Ardizzoni, Fabio Stumbo

Source: https://arxiv.org/
# Bold font weight for LaTeX axes label in matplotlib

In matplotlib you can make the text of an axis label bold by

```python
plt.xlabel('foo', fontweight='bold')
```

You can also use LaTeX with the right backend

```python
plt.xlabel(r'$\phi$')
```

When you combine them, however, the math text is not bold anymore

```python
plt.xlabel(r'$\phi$', fontweight='bold')
```

Nor do the following LaTeX commands seem to have any effect

```python
plt.xlabel(r'$\bf \phi$')
plt.xlabel(r'$\mathbf{\phi}$')
```

How can I make a bold $\phi$ in my axis label?

Unfortunately you can't bold symbols using the bold font, see this question on tex.stackexchange. As the answer suggests, you could use `\boldsymbol` to bold phi:

```python
r'$\boldsymbol{\phi}$'
```

You'll need to load amsmath into the TeX preamble:

```python
matplotlib.rc('text', usetex=True)
matplotlib.rcParams['text.latex.preamble'] = [r"\usepackage{amsmath}"]
```

> This does not work: `ValueError: \boldsymbol{\phi} ... Unknown symbol: \boldsymbol (at char 0), (line:1, col:1)`. Perhaps it requires that we have amsmath loaded? Have you tested this on your machine? – Hooked Jan 14 '13 at 19:15

> @Hooked I think including a preamble should work as described here: `matplotlib.rc('text', usetex=True)`, `matplotlib.rcParams['text.latex.preamble'] = [r"\usepackage{amsmath}"]`. Unfortunately I can't test yet, will update when I have. – Andy Hayden Jan 14 '13 at 19:52

If you intend to have consistently bolded fonts throughout the plot, the best way may be to enable latex and add `\boldmath` to your preamble:

```python
# Optionally set font to Computer Modern to avoid common missing font errors
matplotlib.rc('font', family='serif', serif='cm10')
matplotlib.rc('text', usetex=True)
matplotlib.rcParams['text.latex.preamble'] = [r'\boldmath']
```

Then your axis or figure labels can have any mathematical latex expression and still be bold:

```python
plt.xlabel(r'$\frac{\phi + x}{2}$')
```

However, for portions of labels that are not mathematical, you'll need to explicitly set them as bold:

```python
plt.ylabel(r'\textbf{Counts of} $\lambda$')
```
# Find the Sum of the Infinite Geometric Series

Step 1

This is a geometric sequence since there is a common ratio $r$ between each term. In this case, multiplying the previous term in the sequence by gives the next term. In other words, ${a}_{n}=r\cdot {a}_{n-1}$.

Geometric Sequence:

Step 2

The sum of the first $n$ terms of a geometric series is calculated using the formula ${S}_{n}=\frac{{a}_{1}\left(1-{r}^{n}\right)}{1-r}$. For the sum of an infinite geometric series with $|r|<1$, as $n$ approaches $\infty$, ${r}^{n}$ approaches $0$. Thus, ${S}_{n}$ approaches $S=\frac{{a}_{1}}{1-r}$.

Step 3

The values of ${a}_{1}$ (the first term) and $r$ (the common ratio) can be put in the equation $S=\frac{{a}_{1}}{1-r}$.

Step 4

Simplify the equation to find $S$: simplify the denominator (write as a fraction with a common denominator, combine the numerators over the common denominator, and subtract), then multiply the numerator by the reciprocal of the denominator, cancel the common factor, rewrite the expression, and multiply.
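The limiting behaviour described in Step 2 can be illustrated with a short sketch. The values ${a}_{1}=3$ and $r=\frac{1}{4}$ here are hypothetical stand-ins (the original example's values are not shown above); the partial sums ${S}_{n}$ approach $\frac{{a}_{1}}{1-r}$:

```python
def partial_sum(a1, r, n):
    """S_n = a1 * (1 - r**n) / (1 - r), the sum of the first n terms."""
    return a1 * (1 - r ** n) / (1 - r)

def infinite_sum(a1, r):
    """Limit of S_n as n -> infinity, valid only when |r| < 1."""
    assert abs(r) < 1, "the infinite geometric series converges only for |r| < 1"
    return a1 / (1 - r)

a1, r = 3, 0.25                      # hypothetical example values
limit = infinite_sum(a1, r)          # 3 / (1 - 1/4) = 4
assert abs(partial_sum(a1, r, 50) - limit) < 1e-12
```

By $n=50$ the term ${r}^{n}$ is far below double precision, so the partial sum is numerically indistinguishable from the limit.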
# Step down 48v input POE to 4 outputs 3.3v/5v/12v/custom

#### StealthRT Joined Mar 20, 2009 266

Hey all, I have been searching for an already-built solution that would output 3.3 VDC, 5 VDC, and 12 VDC @ 5 A. I have found that here:

However, the issue is that I will be feeding this 48 VDC (PoE), and it seems that the max this thing can take in is 40 V. Do you think it will be fine with 48 V? Is there anything I could add to the board so that it can take up to 50 V?

LM2596 datasheet: https://www.ti.com/document-viewer/LM2596/datasheet

*** Cross post:
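A quick numeric sanity check, stated as a sketch: the 40 V figure comes from the board description above, and the 44 to 57 V span is the IEEE 802.3af/at PSE output range (an assumption added here, not from the post). The nominal 48 V already exceeds the board's limit, so some form of pre-regulation would be needed:

```python
# Input-voltage sanity check for the buck board described above.
V_IN_MAX = 40.0                       # stated maximum input of the board (volts)
V_POE_NOMINAL = 48.0                  # nominal PoE supply from the post
V_POE_MIN, V_POE_MAX = 44.0, 57.0     # IEEE 802.3af/at PSE range (assumption)

def input_ok(v_supply, v_limit=V_IN_MAX):
    """True if the supply stays at or below the board's input limit."""
    return v_supply <= v_limit

# 48 V nominal already exceeds the 40 V limit, and worst case is higher still:
assert not input_ok(V_POE_NOMINAL)
assert not input_ok(V_POE_MAX)
# A hypothetical pre-regulator dropping the rail to 36 V would be within spec:
assert input_ok(36.0)
```

The headroom question is really about the worst-case PoE voltage, not the nominal one, which is why a margin check against 57 V matters.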
## MECA member companies sold 20,177 diesel retrofits in the US 2011 ##### 21 May 2012 The total number of verified (US EPA- and/or California ARB-verified) diesel retrofit devices (for both on-road and off-road diesel engines) sold in the US (including California) by MECA member companies in 2011 was 20,177, according to the Manufacturers of Emission Controls Association (MECA). Overall, these annual retrofit sales numbers are relatively small compared to the total number of diesel engines currently operating in the US (up to 20 million based on EPA estimates), MECA noted. Of the 2011 total: • 57% (11,506) were diesel particulate filters (DPFs) (includes both passively regenerated and actively regenerated filters); • 23% (4,663) were diesel oxidation catalysts (DOCs); and • 4% (881) were flow-through filters (FTFs). • This total also includes 3,127 closed-crankcase filters. In California, 7,558 diesel retrofit devices were sold, of which 89% (6,729) were DPFs and 11% (805) were FTFs. Sector-wise, in the US (including California), 17,506 diesel retrofit devices were sold for on-road diesel engines and 2,671 for off-road diesel engines. By comparison, MECA member companies sold 29,180 diesel retrofit devices in 2009 and 24,640 in 2010. For DPFs specifically, the number sold in the US (for both on-road and off-road diesel engines) has increased slightly since 2009 (outside of California, 3,329 in 2009, 4,428 in 2010, and 4,777 in 2011; in California, 4,962 in 2009, 5,745 in 2010, and 6,729 in 2011). For DOCs, sales in the US (for both on-road and off-road diesel engines) have decreased significantly (11,906 in 2009, 9,926 in 2010, and 4,663 in 2011). 
The decline in retrofit sales since 2009, especially for DOCs, is most likely due to the decrease in federal funding for clean diesel projects over the same time period, as well as the recent trend of funding being spent more on projects that use engine repowers and/or vehicle replacements rather than retrofit devices, MECA said. DPF sales, although increasing slightly, were expected to be much higher in 2011, especially in California due to the requirements of ARB’s in-use truck and bus regulation (ARB projected that up to 100,000 retrofit DPFs could be installed over the 2011-2014 timeframe to comply with the regulation). In addition, ARB’s in-use off-road diesel vehicle regulation was expected to generate additional demand for DPFs, but amendments to the regulation, approved in December 2010 to give fleets more time to comply due to the economic recession, continue to depress the retrofit market opportunity for off-road diesel engines in the state.

Federal funding from the Diesel Emissions Reduction Act (DERA) through EPA’s National Clean Diesel Campaign (approximately $531 million appropriated from FY 2007 to FY 2011, including $300 million from the American Recovery and Reinvestment Act of 2009) has helped provide much-needed funding and financial incentives for many clean diesel projects; however, more dedicated and innovative funding is needed to clean up all of the diesel engines in the existing fleet, especially the large number of older diesel engines in the off-road sector, MECA says. DERA was re-authorized at the end of 2010 for FY 2012-2016, but only $30 million was appropriated to EPA for DERA for FY 2012, and the President’s budget request for FY 2013 currently includes only $15 million for DERA.
“As EPA moves forward with its new five-year clean diesel strategy, we encourage the agency to promote, where technically feasible, the use of the best available retrofit technology that has been verified by EPA and/or ARB (i.e., DPFs for control of particulate matter emissions), as well as to promote the multi-pollutant benefits that retrofits in general can provide (e.g., black carbon reductions from DPFs and air toxics reductions from catalyzed filters and DOCs). Additional clean diesel funding and incentives at the federal and state level, combined with effective enforcement of California’s various in-use fleet regulations, are key strategies that are needed to drive growth in the diesel retrofit industry.” —MECA’s executive director, Joseph Kubsh

MECA is a non-profit association incorporated in Washington, DC, formed in 1976 to provide solid technical information on emission control technology for motor vehicles. As the emission control industry grew in the subsequent decades, and as its member companies expanded control technology products to other sources, MECA’s mission expanded as well. Today, MECA’s members include leading manufacturers of a variety of emission control equipment for: 1) automobiles, trucks, and buses; 2) off-road vehicles; and 3) stationary sources.
Another maths curiosity from the Futility Closet:

**Fortuitous numbers**

In American usage, 84,672 is said EIGHTY FOUR THOUSAND SIX HUNDRED SEVENTY TWO. Count the letters in each of those words, multiply the counts, and you get 6 × 4 × 8 × 3 × 7 × 7 × 3 = 84,672.

Here’s something I’ve (pointlessly) struggled with for a long time, now. Can you complete this sentence?

Written as words, there are _____________ letters in this sentence.

Use Excel’s LEN() function and AutoSum and try it like this, writing it out one word at a time. So, forty three letters so far, with those two empty boxes. If you were to write “forty three” into those boxes, the total would obviously be more than forty three. A little trial and error, and we get the answer: fifty three. Well, that was fairly straightforward. Let’s try a slightly different sentence. Maybe this isn’t so difficult, after all. One more? That’s not right: there are forty nine letters in that sentence, not forty eight. But now there are forty eight. Is it not possible to accurately complete that sentence, then?
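The spreadsheet trial and error can also be automated. A short brute-force search (my own sketch; the 43-letter base count is just the sentence with the blank left empty) finds every number that makes the sentence true:

```python
ones = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
tens = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def to_words(n):
    # Spell out 1-99 in the space-separated style used above ("fifty three").
    if n < 20:
        return ones[n]
    t, o = divmod(n, 10)
    return tens[t] + (" " + ones[o] if o else "")

def letters(s):
    # Count alphabetic characters only, ignoring spaces and punctuation.
    return sum(c.isalpha() for c in s)

BASE = letters("Written as words, there are letters in this sentence.")  # 43

solutions = [n for n in range(1, 100) if BASE + letters(to_words(n)) == n]
print(solutions)  # [51, 53]
```

The search confirms fifty three, and it also turns up fifty one (43 + 8 letters) as a second valid completion, so these self-referential sentences can have more than one fixed point, or, as with the forty eight / forty nine sentence above, none at all.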
### Home > A2C > Chapter 12 > Lesson 12.1.4 > Problem 12-69

12-69. Graph each complex number and find its absolute value. Remember that the absolute value of a complex number is the same as its distance from the origin.

a. $−2i$
How far is this point away from $0$ on your graph? The absolute value is $2$.

b. $−3 + 4i$
How far is this point away from $0$ on your graph? The Pythagorean Theorem will be helpful here. The absolute value is $5$.

c. $−2 − 2i$
Look at parts (a) and (b).

d. $4$
Look at parts (a) and (b).
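The distance-from-origin rule can be checked numerically: Python's built-in `abs()` on complex numbers computes exactly this Pythagorean distance (a quick sketch, not part of the lesson):

```python
import math

# abs(z) for complex z is sqrt(re**2 + im**2), i.e. the distance to the origin.
for z in [-2j, -3 + 4j, -2 - 2j, 4 + 0j]:
    print(z, abs(z))

assert abs(-2j) == 2.0                            # part (a)
assert abs(-3 + 4j) == 5.0                        # part (b): a 3-4-5 right triangle
assert math.isclose(abs(-2 - 2j), math.sqrt(8))   # part (c): 2*sqrt(2)
assert abs(4 + 0j) == 4.0                         # part (d): a real number
```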
# zbMATH — the first resource for mathematics

Optimal Hardy-Littlewood inequalities uniformly bounded by a universal constant. (English) Zbl 1410.46027

The original Hardy-Littlewood inequality, published in [G. H. Hardy and J. E. Littlewood, Q. J. Math., Oxf. Ser. 5, 241–254 (1934; JFM 60.0335.01)], relates some $$\ell_{r}$$-norm of the coefficients of a bilinear form on $$\mathbb{C}^{n} \times \mathbb{C}^{n}$$ with the supremum on $$B_{\ell_{p}^{n}} \times B_{\ell_{q}^{n}}$$. This was later extended to $$m$$-linear forms, and this article contributes in that direction. Given $$1 < p_{1}, \ldots , p_{m} \leq \infty$$, denote $$\frac{1}{\mathbf{p}} := \frac{1}{p_{1}} + \cdots + \frac{1}{p_{m}}$$ (here we use the convention $$\frac{1}{\infty}=0$$). If $$\frac{1}{2} \leq \frac{1}{\mathbf{p}} <1$$, then, for every $$n \in \mathbb{N}$$ and every $$m$$-linear $$T : \mathbb{C}^{n} \times \cdots \times \mathbb{C}^{n} \to \mathbb{C}$$, we have $\bigg( \sum_{i_{1}, \dots , i_{m}=1}^{n} | T(e_{i_{1}}, \dots , e_{i_{m}}) |^{\frac{1}{1-\frac{1}{\mathbf{p}}}} \bigg)^{1-\frac{1}{\mathbf{p}}} \leq 2^{(m-1) \big( 1 - \frac{1}{\mathbf{p} }\big)} \sup_{\substack{ {\| x_{j} \|_{p_{j}} < 1}\\ {j=1, \dots ,m}}} | T(x_{1} , \ldots , x_{m}) | .$ Some other similar results, involving mixed sums, are also given.

##### MSC:

46G25 (Spaces of) multilinear mappings, polynomials
47H60 Multilinear and polynomial operators

Full Text:

##### References:

[1] Dahmane Achour, Elhadj Dahia, Pilar Rueda and Enrique A. Sánchez-Pérez. Domination spaces and factorization of linear and multilinear summing operators. Quaest. Math., 39(8):1071-1092, 2016.
[2] Nacib Albuquerque, Frederic Bayart, Daniel Pellegrino and Juan B. Seoane-Sepúlveda. Sharp generalizations of the multilinear Bohnenblust-Hille inequality. J. Funct. Anal., 266(6):3726-3740, 2014. · Zbl 1319.46035
[3] Nacib Albuquerque, Frederic Bayart, Daniel Pellegrino and Juan B. Seoane-Sepúlveda. Optimal Hardy-Littlewood type inequalities for polynomials and multilinear operators. Isr. J. Math., 211(1):197-220, 2016. · Zbl 1342.26040
[4] Nacib Albuquerque and Lisiane Rezende. Anisotropic regularity principle in sequence spaces and applications. To appear in Commun. Contemp. Math. · Zbl 1411.46037
[5] Gustavo Araújo and Daniel Pellegrino. On the constants of the Bohnenblust-Hille and Hardy-Littlewood inequalities. Bull. Braz. Math. Soc. (N.S.), 48(1):141-169, 2017. · Zbl 06767894
[6] Gustavo Araújo, Daniel Pellegrino and Diogo Diniz P. da Silva e Silva. On the upper bounds for the constants of the Hardy-Littlewood inequality. J. Funct. Anal., 267(6):1878-1888, 2014. · Zbl 1298.26066
[7] Richard Aron, Daniel Núñez-Alarcón, Daniel Pellegrino and Diana M. Serrano-Rodríguez. Optimal exponents for Hardy-Littlewood inequalities for m-linear operators. Linear Algebra Appl., 531:399-422, 2017. · Zbl 06770622
[8] Frédéric Bayart. Multiple summing maps: Coordinatewise summability, inclusion theorems and $$p$$-Sidon sets. J. Funct. Anal., 274(4):1129-1154, 2018. · Zbl 1391.46057
[9] Frédéric Bayart, Daniel Pellegrino and Juan B. Seoane-Sepúlveda. The Bohr radius of the n-dimensional polydisk is equivalent to $$\sqrt{\frac{\log n}{n}}$$. Adv. Math., 264:726-746, 2014. · Zbl 1331.46037
[10] H. Frederic Bohnenblust and Einar Hille. On the absolute convergence of Dirichlet series. Ann. Math., 32(3):600-622, 1931. · JFM 57.0266.05
[11] Qingying Bu and Coenraad C.A. Labuschagne. Positive multiple summing and concave multilinear operators on Banach lattices. Mediterr. J. Math., 12(1):77-87, 2015. · Zbl 1331.46038
[12] Wasthenny Cavalcante. Some applications of the regularity principle in sequence spaces. Positivity, 22(1):191-198, 2018. · Zbl 06861658
[13] Wasthenny Cavalcante and Daniel Núñez-Alarcón. Remarks on an inequality of Hardy and Littlewood. Quaest. Math., 39(8):1101-1113, 2016.
[14] Andreas Defant and Pablo Sevilla-Peris. A new multilinear insight on Littlewood’s 4/3-inequality. J. Funct. Anal., 256(5):1642-1664, 2009. · Zbl 1171.46034
[15] Olvido Delgado and Enrique A. Sánchez-Pérez. Strong extensions for $$q$$-summing operators acting in $$p$$-convex Banach function spaces for $$1 \leq p \leq q$$. Positivity, 20(4):999-1014, 2016. · Zbl 1436.46032
[16] Joe Diestel, Hans Jarchow and Andrew Tonge. Absolutely summing operators, volume 43 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, 1995. · Zbl 0855.47016
[17] Verónica Dimant and Pablo Sevilla-Peris. Summation of coefficients of polynomials on $$\ell_p$$ spaces. Publ. Mat., Barc., 60(2):289-310, 2016. · Zbl 1378.46032
[18] Godfrey Harold Hardy and John Edensor Littlewood. Bilinear forms bounded in space $$[p, q]$$. Q. J. Math., Oxf. Ser., 5:241-254, 1934.
[19] John Edensor Littlewood. On bounded bilinear forms in an infinite number of variables. Quart. J. Math., 1(1):164-174, 1930. · JFM 56.0335.01
[20] Ashley Montanaro. Some applications of hypercontractive inequalities in quantum information theory. J. Math. Phys., 53(12), 2012. · Zbl 1278.81045
[21] Daniel Pellegrino. The optimal constants of the mixed $$(\ell_1, \ell_2)$$-Littlewood inequality. J. Number Theory, 160:11-18, 2016. · Zbl 1431.46024
[22] Daniel Pellegrino, Djair Santos and Joedson Santos. Optimal blow up rate for the constants of Khinchin type inequalities. Quaest. Math., 41(3):303-318, 2018. · Zbl 1391.60008
[23] Daniel Pellegrino, Joedson Santos, Diana M. Serrano-Rodríguez and Eduardo V. Teixeira. A regularity principle in sequence spaces and applications. Bull. Sci. Math., 141(8):802-837, 2017. · Zbl 1404.46041
[24] Daniel Pellegrino and Eduardo V. Teixeira. Towards sharp Bohnenblust-Hille constants. Commun. Contemp. Math., 20(3), 2018. · Zbl 1403.46037
[25] T. Praciano-Pereira. On bounded multilinear forms on a class of $$\ell_p$$ spaces. J. Math. Anal. Appl., 81(2):561-568, 1981. · Zbl 0497.46007

Published by the CNRS - UMR 6620

This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
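As a concrete instance of the displayed theorem (my own worked example, not taken from the review): with $$m=2$$ and $$p_{1}=p_{2}=4$$ we get $$\frac{1}{\mathbf{p}}=\frac{1}{2}$$, so the exponent is $$\frac{1}{1-1/2}=2$$ and the constant is $$2^{(2-1)(1-1/2)}=\sqrt{2}$$:

```latex
% m = 2, p_1 = p_2 = 4  =>  1/p = 1/2, exponent 2, constant sqrt(2)
\[
\bigg( \sum_{i_{1}, i_{2}=1}^{n} | T(e_{i_{1}}, e_{i_{2}}) |^{2} \bigg)^{\frac{1}{2}}
\leq \sqrt{2}\,
\sup_{\substack{\| x_{j} \|_{4} < 1 \\ j=1,2}} | T(x_{1}, x_{2}) |
\]
```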
# If i = √-1, then how many values does i^(-2n) have for different n ϵ Z?

1. One
2. Two
3. Four
4. Infinite

Option 2 : Two

## Detailed Solution

Concept:

The value i, or the concept of i, is used in explaining and expressing complex numbers. Complex numbers are numbers with a real and an imaginary part, and the imaginary part is defined with the help of i. Basically, "i" is the imaginary unit, also called iota, with value √-1. A negative value inside a square root signifies an imaginary value. All the basic arithmetic operators are applicable to imaginary numbers, and squaring an imaginary number gives a negative value:

i^2 = -1, i^3 = -i, i^4 = 1

Calculation:

Given i = √-1, so i^2 = -1. Then

i^(-2n) = (i^2)^(-n) = (-1)^(-n) = (-1)^n

For any n ϵ Z: when n is even, i^(-2n) = 1; when n is odd, i^(-2n) = -1.

∴ i^(-2n) takes exactly two values for n ϵ Z, namely -1 and 1.
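The two-value claim is easy to confirm by brute force; reducing the exponent mod 4 keeps the arithmetic exact (a quick sketch of mine):

```python
def i_power(k):
    # i**k for any integer k, computed exactly via the period-4 cycle 1, i, -1, -i.
    return [1, 1j, -1, -1j][k % 4]

# i^(-2n) over a range of integers n, including negative ones
values = {i_power(-2 * n) for n in range(-10, 11)}
print(sorted(values, key=lambda v: -v))  # only two values: 1 and -1
```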
## Wednesday, August 31, 2016

### OpenStax: Doing OER effectively and sustainably.

I recently discovered OpenStax and their collection of open source textbooks for classes at the high school and collegiate levels.  Their texts hit the major courses in math, science, and the social sciences—from calculus to physics to economics.  For readers of this blog, largely interested in mathematics, the three-semester calculus text, derived from Gil Strang's Calculus, is a high quality text rendered in both pdf and html (and soon in print).  Having taught calculus off and on for the better part of 25 years, I find this free text is at or near the quality of Stewart's.

Having also contributed a bit to Guichard's Calculus book, another good open source calculus text, I was particularly impressed by the high level of production apparent in the OpenStax calculus book.  I know it's pretty difficult for a volunteer corps to create such a high-quality text.  Curious about how such a high-quality open source text came to be, I reached out to Anthony Palmiotto, Editorial Director at OpenStax, to learn a bit more about how they do things there.

The OpenStax organization is a non-profit hosted at Rice University.  Its first open source text was released in 2012.  Since then their library has grown to around two dozen texts.  The consistency of presentation and quality across all of the texts reflects a high quality editorial process.  Their editorial philosophy is characterized by a focus on larger-market classes like calculus, intro physics, intro biology, economics, etc.  These are the areas where accessible, high-quality texts can have the most impact.  They focus on a single offering in each of the course areas, and they want their textbooks to match closely the offerings in the traditional commercial markets for these courses.  Doing so makes it easier for textbook adopters to consider open source texts right next to their commercially offered competitors.
To begin, each text has primary authors and contributing authors.  The authors and editors are paid for their work (unlike in many open-source publishing efforts).  There is a development phase and a maintenance phase.  During the maintenance phase, errata are collected continuously from readers, authors, and editors.  Corrections are made incrementally to the on-line version, and full updates to the texts (pdf) occur between the traditional academic terms (semesters) of most colleges and universities.

Baseline funding comes from Rice University.  Additional funding for specific book projects is sought on a case-by-case basis through grants and donations.  OpenStax has received funding from a number of philanthropic organizations including the Bill and Melinda Gates Foundation.  They also receive funding from companies in the educational technology sector.  For example, WebAssign bundles OpenStax texts with its commercial offerings and supports OpenStax in exchange.  OpenStax does not make money on the print book sales.  For the sake of accessibility, print books are sold at or near cost.

Though it is notoriously difficult to assess uptake and adoption of open-source texts, the data that OpenStax receives, from its commercial partners as well as from self-identified users, shows significant year-over-year usage growth, according to Palmiotto.

It should be noted, too, that in the case of OpenStax, I do not use the term open source lightly.  In the true spirit of borrowing, sharing, modifying and redistributing, the source for the texts is hosted on the OpenStax CNX platform.  Users are free to fork an existing text and to modify it to suit their needs.  While OpenStax will conveniently host your forked text on their site, you are also free to download, modify, and redistribute the text on your platform of choice.

In the end, though it was hard to get an accurate figure, the total cost of development for the OpenStax calculus text was between $500,000 and $1,000,000.
This goes a long way towards satisfying my curiosity about the high quality of the text.  In other words, it wasn't a volunteer effort, but rather a paid group of authors and developers.  It remains an open question whether a purely volunteer corps can produce a text of the same quality.  However, unlike a typical commercial text, the OpenStax calculus text could represent the foundation of a purely volunteer effort at maintenance and continued development moving forward.  Regardless, all of the texts are significant contributions to the OER effort and help to lower costs and increase accessibility for students taking these courses.

The OpenStax approach to open-source academic publishing is effective and sustainable.  It is a model for how open-source publishing can generate high-quality texts that compete with commercial offerings.  In a subsequent posting, I'll discuss OpenStax's partner program for educational institutions that wish to make open educational resources a priority.

## Thursday, August 4, 2016

### Expii creator hopes to harness the power of the OER community.

Po-Shen Loh at Carnegie Mellon is perhaps best known for his success coaching the US Math Olympiad team.  However, he is equally passionate about a project he started called Expii that he hopes will provide self-directed learning of mathematics using open-source contributions and a sophisticated dynamic learning system.

Mathematical subject matter is organized in a collection of nodes that are determined by a core group of editors at Expii.  Navigating through the nodes eventually leads to a specific topic node.  Within that topic node resides a collection of explanations and exercises for that specific topic written by a volunteer contributor.  Explanations are constructed in the site's easy-to-use editor, which recognizes LaTeX.  The site also allows one to embed instructional YouTube videos.  Exercises, too, are easily constructed within the site's built-in editor.
There are two aspects of Expii that separate it from other open source on-line learning systems: the editorial approach and the dynamic learning algorithm.

First, unlike the Wikipedia editorial approach, contributors do not typically edit other contributors' work.  Factual corrections to an existing contribution are welcome, but stylistic re-writes are discouraged.  Rather, if a contributor does not like the style of a particular explanation, that contributor is encouraged to submit a competing explanation in the style that they prefer.  Subsequently, the two (or more) explanations then compete with each other by receiving upvotes/downvotes from users of the site.  Explanations with more votes are listed first when a learner visits a particular node.  Prof. Loh hopes that this approach will provide a variety of explanations on different topics that will resonate with different learners.

Second, the learning algorithm is a novel application of the ELO ranking system that is used in international chess competition.  In a sense, learners and exercises are in competition with each other and are ranked using an ELO-type system.  In this way, when a learner requests an exercise on a particular topic, their ranking is matched to an exercise with a nearby ranking.  As the learner improves, their ranking improves and they are presented with increasingly challenging exercises.

At the moment, the Expii effort is funded by private investors and is a for-profit entity.  However, Prof. Loh says that the future business model will not include a paywall for access to the site.  Rather, one possible business model would be mediating the pairing of contributors and learners on Expii into paid, ad hoc, teaching and tutoring relationships.  The core group is looking for investors and applying for federal grant money to keep the 3-year effort going.  There has been steady growth in both users and contributors, but it seems not to have reached "critical mass" yet.
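The ELO-style matching described above can be sketched in a few lines (a generic Elo update of my own, not Expii's actual algorithm; the K-factor of 32 is just the conventional chess default):

```python
def elo_update(learner, exercise, solved, k=32):
    """One rating update after a learner attempts an exercise.

    solved is 1 if the learner answered correctly, 0 otherwise.
    The two ratings move in opposite directions by the same amount."""
    expected = 1 / (1 + 10 ** ((exercise - learner) / 400))
    delta = k * (solved - expected)
    return learner + delta, exercise - delta

# A 1200-rated learner solves a 1400-rated exercise: a large rating gain,
# and the exercise is re-rated as slightly easier than previously thought.
learner, exercise = elo_update(1200, 1400, solved=1)
print(round(learner), round(exercise))
```

Matching a learner to exercises with nearby ratings then amounts to picking exercises whose expected score is close to one half.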
The site welcomes the addition of existing open-source materials as long as they are appropriately licensed.  For example, a Creative Commons license that allows commercial use would work. Prof. Loh states that a lot of his motivation for starting the site came from his work with the Math Olympiad and other mathematics competitions.  He recognizes that for the US (and other countries) to compete, there needs to be a large pool of talented mathematics students.  He hopes that Expii can deepen that pool.  Indeed, he hopes that it can take a struggling mathematics student that may not be getting quite what he/she needs in their home classroom and provide just the right kind of learning experience to turn them into future Math Olympians.  And, as a great side benefit, Expii will elevate the mathematical literacy across the board. If you have OERs that you have developed and would like a venue, Expii may be a good choice for you.  The ranking system may be the kind of objective assessment of your work that could be used in tenure, promotion and merit considerations at your institution.
## G = S3×D5⋊C8, order 480 = 2^5·3·5

### Direct product of S3 and D5⋊C8

Series:

• Derived series: C1 — C15 — S3×D5⋊C8
• Chief series: C1 — C5 — C15 — C30 — C3×Dic5 — C3×C5⋊C8 — S3×C5⋊C8 — S3×D5⋊C8
• Lower central: C15 — S3×D5⋊C8
• Upper central: C1 — C4

Generators and relations for S3×D5⋊C8:

G = < a,b,c,d,e | a^3=b^2=c^5=d^2=e^8=1, bab=a^-1, ac=ca, ad=da, ae=ea, bc=cb, bd=db, be=eb, dcd=c^-1, ece^-1=c^3, ede^-1=c^2d >

Subgroups: 676 in 152 conjugacy classes, 60 normal (36 characteristic)
C1, C2, C2, C3, C4, C4, C22, C5, S3, S3, C6, C6, C8, C2×C4, C23, D5, D5, C10, C10, Dic3, Dic3, C12, C12, D6, D6, C2×C6, C15, C2×C8, C22×C4, Dic5, Dic5, C20, C20, D10, D10, C2×C10, C3⋊C8, C24, C4×S3, C4×S3, C2×Dic3, C2×C12, C22×S3, C5×S3, C3×D5, D15, C30, C22×C8, C5⋊C8, C5⋊C8, C4×D5, C4×D5, C2×Dic5, C2×C20, C22×D5, S3×C8, C2×C3⋊C8, C2×C24, S3×C2×C4, C5×Dic3, C3×Dic5, Dic15, C60, S3×D5, C6×D5, S3×C10, D30, D5⋊C8, D5⋊C8, C2×C5⋊C8, C2×C4×D5, S3×C2×C8, C3×C5⋊C8, C15⋊C8, D5×Dic3, S3×Dic5, D30.C2, D5×C12, S3×C20, C4×D15, C2×S3×D5, C2×D5⋊C8, S3×C5⋊C8, D15⋊C8, C3×D5⋊C8, C60.C4, C4×S3×D5, S3×D5⋊C8

Quotients: C1, C2, C4, C22, S3, C8, C2×C4, C23, D6, C2×C8, C22×C4, F5, C4×S3, C22×S3, C22×C8, C2×F5, S3×C8, S3×C2×C4, D5⋊C8, C22×F5, S3×C2×C8, S3×F5, C2×D5⋊C8, C2×S3×F5, S3×D5⋊C8

Smallest permutation representation of S3×D5⋊C8: on 120 points. Generators in S120:
(1 26 50)(2 27 51)(3 28 52)(4 29 53)(5 30 54)(6 31 55)(7 32 56)(8 25 49)(9 24 60)(10 17 61)(11 18 62)(12 19 63)(13 20 64)(14 21 57)(15 22 58)(16 23 59)(33 41 113)(34 42 114)(35 43 115)(36 44 116)(37 45 117)(38 46 118)(39 47 119)(40 48 120)(65 109 92)(66 110 93)(67 111 94)(68 112 95)(69 105 96)(70 106 89)(71 107 90)(72 108 91)(73 88 104)(74 81 97)(75 82 98)(76 83 99)(77 84 100)(78 85 101)(79 86 102)(80 87 103) (1 5)(2 6)(3 7)(4 8)(9 64)(10 57)(11 58)(12 59)(13 60)(14 61)(15 62)(16 63)(17 21)(18 22)(19 23)(20 24)(25 53)(26 54)(27 55)(28 56)(29 49)(30 50)(31 51)(32 52)(33 45)(34 46)(35 47)(36 48)(37 41)(38 
42)(39 43)(40 44)(65 105)(66 106)(67 107)(68 108)(69 109)(70 110)(71 111)(72 112)(73 100)(74 101)(75 102)(76 103)(77 104)(78 97)(79 98)(80 99)(81 85)(82 86)(83 87)(84 88)(89 93)(90 94)(91 95)(92 96)(113 117)(114 118)(115 119)(116 120) (1 24 88 113 96)(2 114 17 89 81)(3 90 115 82 18)(4 83 91 19 116)(5 20 84 117 92)(6 118 21 93 85)(7 94 119 86 22)(8 87 95 23 120)(9 73 41 105 50)(10 106 74 51 42)(11 52 107 43 75)(12 44 53 76 108)(13 77 45 109 54)(14 110 78 55 46)(15 56 111 47 79)(16 48 49 80 112)(25 103 68 59 40)(26 60 104 33 69)(27 34 61 70 97)(28 71 35 98 62)(29 99 72 63 36)(30 64 100 37 65)(31 38 57 66 101)(32 67 39 102 58) (1 96)(2 81)(3 18)(4 116)(5 92)(6 85)(7 22)(8 120)(9 41)(11 52)(12 76)(13 45)(15 56)(16 80)(19 83)(20 117)(23 87)(24 113)(25 40)(26 69)(27 97)(28 62)(29 36)(30 65)(31 101)(32 58)(33 60)(34 70)(37 64)(38 66)(42 106)(44 53)(46 110)(48 49)(50 105)(51 74)(54 109)(55 78)(59 103)(63 99)(67 102)(71 98)(75 107)(79 111)(82 90)(86 94)(89 114)(93 118) (1 2 3 4 5 6 7 8)(9 10 11 12 13 14 15 16)(17 18 19 20 21 22 23 24)(25 26 27 28 29 30 31 32)(33 34 35 36 37 38 39 40)(41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56)(57 58 59 60 61 62 63 64)(65 66 67 68 69 70 71 72)(73 74 75 76 77 78 79 80)(81 82 83 84 85 86 87 88)(89 90 91 92 93 94 95 96)(97 98 99 100 101 102 103 104)(105 106 107 108 109 110 111 112)(113 114 115 116 117 118 119 120) G:=sub<Sym(120)| (1,26,50)(2,27,51)(3,28,52)(4,29,53)(5,30,54)(6,31,55)(7,32,56)(8,25,49)(9,24,60)(10,17,61)(11,18,62)(12,19,63)(13,20,64)(14,21,57)(15,22,58)(16,23,59)(33,41,113)(34,42,114)(35,43,115)(36,44,116)(37,45,117)(38,46,118)(39,47,119)(40,48,120)(65,109,92)(66,110,93)(67,111,94)(68,112,95)(69,105,96)(70,106,89)(71,107,90)(72,108,91)(73,88,104)(74,81,97)(75,82,98)(76,83,99)(77,84,100)(78,85,101)(79,86,102)(80,87,103), 
(1,5)(2,6)(3,7)(4,8)(9,64)(10,57)(11,58)(12,59)(13,60)(14,61)(15,62)(16,63)(17,21)(18,22)(19,23)(20,24)(25,53)(26,54)(27,55)(28,56)(29,49)(30,50)(31,51)(32,52)(33,45)(34,46)(35,47)(36,48)(37,41)(38,42)(39,43)(40,44)(65,105)(66,106)(67,107)(68,108)(69,109)(70,110)(71,111)(72,112)(73,100)(74,101)(75,102)(76,103)(77,104)(78,97)(79,98)(80,99)(81,85)(82,86)(83,87)(84,88)(89,93)(90,94)(91,95)(92,96)(113,117)(114,118)(115,119)(116,120), (1,24,88,113,96)(2,114,17,89,81)(3,90,115,82,18)(4,83,91,19,116)(5,20,84,117,92)(6,118,21,93,85)(7,94,119,86,22)(8,87,95,23,120)(9,73,41,105,50)(10,106,74,51,42)(11,52,107,43,75)(12,44,53,76,108)(13,77,45,109,54)(14,110,78,55,46)(15,56,111,47,79)(16,48,49,80,112)(25,103,68,59,40)(26,60,104,33,69)(27,34,61,70,97)(28,71,35,98,62)(29,99,72,63,36)(30,64,100,37,65)(31,38,57,66,101)(32,67,39,102,58), (1,96)(2,81)(3,18)(4,116)(5,92)(6,85)(7,22)(8,120)(9,41)(11,52)(12,76)(13,45)(15,56)(16,80)(19,83)(20,117)(23,87)(24,113)(25,40)(26,69)(27,97)(28,62)(29,36)(30,65)(31,101)(32,58)(33,60)(34,70)(37,64)(38,66)(42,106)(44,53)(46,110)(48,49)(50,105)(51,74)(54,109)(55,78)(59,103)(63,99)(67,102)(71,98)(75,107)(79,111)(82,90)(86,94)(89,114)(93,118), (1,2,3,4,5,6,7,8)(9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64)(65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88)(89,90,91,92,93,94,95,96)(97,98,99,100,101,102,103,104)(105,106,107,108,109,110,111,112)(113,114,115,116,117,118,119,120)>; G:=Group( (1,26,50)(2,27,51)(3,28,52)(4,29,53)(5,30,54)(6,31,55)(7,32,56)(8,25,49)(9,24,60)(10,17,61)(11,18,62)(12,19,63)(13,20,64)(14,21,57)(15,22,58)(16,23,59)(33,41,113)(34,42,114)(35,43,115)(36,44,116)(37,45,117)(38,46,118)(39,47,119)(40,48,120)(65,109,92)(66,110,93)(67,111,94)(68,112,95)(69,105,96)(70,106,89)(71,107,90)(72,108,91)(73,88,104)(74,81,97)(75,82,98)(76,83,99)(77,84,100)(78,85,101)(79,86,102)(80,87,103), 
(1,5)(2,6)(3,7)(4,8)(9,64)(10,57)(11,58)(12,59)(13,60)(14,61)(15,62)(16,63)(17,21)(18,22)(19,23)(20,24)(25,53)(26,54)(27,55)(28,56)(29,49)(30,50)(31,51)(32,52)(33,45)(34,46)(35,47)(36,48)(37,41)(38,42)(39,43)(40,44)(65,105)(66,106)(67,107)(68,108)(69,109)(70,110)(71,111)(72,112)(73,100)(74,101)(75,102)(76,103)(77,104)(78,97)(79,98)(80,99)(81,85)(82,86)(83,87)(84,88)(89,93)(90,94)(91,95)(92,96)(113,117)(114,118)(115,119)(116,120), (1,24,88,113,96)(2,114,17,89,81)(3,90,115,82,18)(4,83,91,19,116)(5,20,84,117,92)(6,118,21,93,85)(7,94,119,86,22)(8,87,95,23,120)(9,73,41,105,50)(10,106,74,51,42)(11,52,107,43,75)(12,44,53,76,108)(13,77,45,109,54)(14,110,78,55,46)(15,56,111,47,79)(16,48,49,80,112)(25,103,68,59,40)(26,60,104,33,69)(27,34,61,70,97)(28,71,35,98,62)(29,99,72,63,36)(30,64,100,37,65)(31,38,57,66,101)(32,67,39,102,58), (1,96)(2,81)(3,18)(4,116)(5,92)(6,85)(7,22)(8,120)(9,41)(11,52)(12,76)(13,45)(15,56)(16,80)(19,83)(20,117)(23,87)(24,113)(25,40)(26,69)(27,97)(28,62)(29,36)(30,65)(31,101)(32,58)(33,60)(34,70)(37,64)(38,66)(42,106)(44,53)(46,110)(48,49)(50,105)(51,74)(54,109)(55,78)(59,103)(63,99)(67,102)(71,98)(75,107)(79,111)(82,90)(86,94)(89,114)(93,118), (1,2,3,4,5,6,7,8)(9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24)(25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64)(65,66,67,68,69,70,71,72)(73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88)(89,90,91,92,93,94,95,96)(97,98,99,100,101,102,103,104)(105,106,107,108,109,110,111,112)(113,114,115,116,117,118,119,120) ); 
G=PermutationGroup([[(1,26,50),(2,27,51),(3,28,52),(4,29,53),(5,30,54),(6,31,55),(7,32,56),(8,25,49),(9,24,60),(10,17,61),(11,18,62),(12,19,63),(13,20,64),(14,21,57),(15,22,58),(16,23,59),(33,41,113),(34,42,114),(35,43,115),(36,44,116),(37,45,117),(38,46,118),(39,47,119),(40,48,120),(65,109,92),(66,110,93),(67,111,94),(68,112,95),(69,105,96),(70,106,89),(71,107,90),(72,108,91),(73,88,104),(74,81,97),(75,82,98),(76,83,99),(77,84,100),(78,85,101),(79,86,102),(80,87,103)], [(1,5),(2,6),(3,7),(4,8),(9,64),(10,57),(11,58),(12,59),(13,60),(14,61),(15,62),(16,63),(17,21),(18,22),(19,23),(20,24),(25,53),(26,54),(27,55),(28,56),(29,49),(30,50),(31,51),(32,52),(33,45),(34,46),(35,47),(36,48),(37,41),(38,42),(39,43),(40,44),(65,105),(66,106),(67,107),(68,108),(69,109),(70,110),(71,111),(72,112),(73,100),(74,101),(75,102),(76,103),(77,104),(78,97),(79,98),(80,99),(81,85),(82,86),(83,87),(84,88),(89,93),(90,94),(91,95),(92,96),(113,117),(114,118),(115,119),(116,120)], [(1,24,88,113,96),(2,114,17,89,81),(3,90,115,82,18),(4,83,91,19,116),(5,20,84,117,92),(6,118,21,93,85),(7,94,119,86,22),(8,87,95,23,120),(9,73,41,105,50),(10,106,74,51,42),(11,52,107,43,75),(12,44,53,76,108),(13,77,45,109,54),(14,110,78,55,46),(15,56,111,47,79),(16,48,49,80,112),(25,103,68,59,40),(26,60,104,33,69),(27,34,61,70,97),(28,71,35,98,62),(29,99,72,63,36),(30,64,100,37,65),(31,38,57,66,101),(32,67,39,102,58)], [(1,96),(2,81),(3,18),(4,116),(5,92),(6,85),(7,22),(8,120),(9,41),(11,52),(12,76),(13,45),(15,56),(16,80),(19,83),(20,117),(23,87),(24,113),(25,40),(26,69),(27,97),(28,62),(29,36),(30,65),(31,101),(32,58),(33,60),(34,70),(37,64),(38,66),(42,106),(44,53),(46,110),(48,49),(50,105),(51,74),(54,109),(55,78),(59,103),(63,99),(67,102),(71,98),(75,107),(79,111),(82,90),(86,94),(89,114),(93,118)], 
[(1,2,3,4,5,6,7,8),(9,10,11,12,13,14,15,16),(17,18,19,20,21,22,23,24),(25,26,27,28,29,30,31,32),(33,34,35,36,37,38,39,40),(41,42,43,44,45,46,47,48),(49,50,51,52,53,54,55,56),(57,58,59,60,61,62,63,64),(65,66,67,68,69,70,71,72),(73,74,75,76,77,78,79,80),(81,82,83,84,85,86,87,88),(89,90,91,92,93,94,95,96),(97,98,99,100,101,102,103,104),(105,106,107,108,109,110,111,112),(113,114,115,116,117,118,119,120)]]) 60 conjugacy classes class 1 2A 2B 2C 2D 2E 2F 2G 3 4A 4B 4C 4D 4E 4F 4G 4H 5 6A 6B 6C 8A ··· 8H 8I ··· 8P 10A 10B 10C 12A 12B 12C 12D 15 20A 20B 20C 20D 24A ··· 24H 30 60A 60B order 1 2 2 2 2 2 2 2 3 4 4 4 4 4 4 4 4 5 6 6 6 8 ··· 8 8 ··· 8 10 10 10 12 12 12 12 15 20 20 20 20 24 ··· 24 30 60 60 size 1 1 3 3 5 5 15 15 2 1 1 3 3 5 5 15 15 4 2 10 10 5 ··· 5 15 ··· 15 4 12 12 2 2 10 10 8 4 4 12 12 10 ··· 10 8 8 8 60 irreducible representations dim 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 4 4 4 4 4 8 8 8 type + + + + + + + + + + + + + + + image C1 C2 C2 C2 C2 C2 C4 C4 C4 C4 C8 S3 D6 D6 C4×S3 C4×S3 S3×C8 F5 C2×F5 C2×F5 C2×F5 D5⋊C8 S3×F5 C2×S3×F5 S3×D5⋊C8 kernel S3×D5⋊C8 S3×C5⋊C8 D15⋊C8 C3×D5⋊C8 C60.C4 C4×S3×D5 D5×Dic3 S3×C20 C4×D15 C2×S3×D5 S3×D5 D5⋊C8 C5⋊C8 C4×D5 C20 D10 D5 C4×S3 Dic3 C12 D6 S3 C4 C2 C1 # reps 1 2 2 1 1 1 2 2 2 2 16 1 2 1 2 2 8 1 1 1 1 4 1 1 2 Matrix representation of S3×D5⋊C8 in GL6(𝔽241) 240 240 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 , 1 0 0 0 0 0 240 240 0 0 0 0 0 0 240 0 0 0 0 0 0 240 0 0 0 0 0 0 240 0 0 0 0 0 0 240 , 1 0 0 0 0 0 0 1 0 0 0 0 0 0 240 240 240 240 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 , 240 0 0 0 0 0 0 240 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 240 240 240 240 , 177 0 0 0 0 0 0 177 0 0 0 0 0 0 0 171 2 171 0 0 72 0 70 70 0 0 70 70 0 72 0 0 171 2 171 0 G:=sub<GL(6,GF(241))| 
[240,1,0,0,0,0,240,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1],[1,240,0,0,0,0,0,240,0,0,0,0,0,0,240,0,0,0,0,0,0,240,0,0,0,0,0,0,240,0,0,0,0,0,0,240],[1,0,0,0,0,0,0,1,0,0,0,0,0,0,240,1,0,0,0,0,240,0,1,0,0,0,240,0,0,1,0,0,240,0,0,0],[240,0,0,0,0,0,0,240,0,0,0,0,0,0,0,0,1,240,0,0,0,1,0,240,0,0,1,0,0,240,0,0,0,0,0,240],[177,0,0,0,0,0,0,177,0,0,0,0,0,0,0,72,70,171,0,0,171,0,70,2,0,0,2,70,0,171,0,0,171,70,72,0] >;

S3×D5⋊C8 in GAP, Magma, Sage, TeX:

S_3\times D_5\rtimes C_8 % in TeX

G:=Group("S3xD5:C8"); // GroupNames label

G:=SmallGroup(480,986); // by ID

G=gap.SmallGroup(480,986); # by ID

G:=PCGroup([7,-2,-2,-2,-2,-2,-3,-5,56,100,80,1356,9414,2379]); // Polycyclic

G:=Group<a,b,c,d,e|a^3=b^2=c^5=d^2=e^8=1,b*a*b=a^-1,a*c=c*a,a*d=d*a,a*e=e*a,b*c=c*b,b*d=d*b,b*e=e*b,d*c*d=c^-1,e*c*e^-1=c^3,e*d*e^-1=c^2*d>; // generators/relations
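As a trivial sanity check of the order stated above (my own sketch; a semidirect product has the same order as the corresponding direct product, so only the orders of the three factors matter):

```python
# |S3 x (D5 : C8)| = |S3| * |D5| * |C8| = 6 * 10 * 8
order = 6 * 10 * 8
print(order)  # 480
assert order == 480 == 2**5 * 3 * 5
```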
# lt

## Description

Return true if A is strictly less than B.

Returns: bool

## Domain

This is a scalar function (it calculates a single output value for a single input row).

## Usage

lt( a, b )

| Argument | Type | Required | Multiple |
| --- | --- | --- | --- |
| a | any | Required | Only one |
| b | any | Required | Only one |
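The comparison semantics documented above can be sketched in Python. This is an illustrative analogue, not the tool's actual implementation; the function name simply mirrors the one documented here:

```python
def lt(a, b):
    """Scalar comparison: True if a is strictly less than b."""
    return a < b

# Applied row by row, as a scalar function would be:
rows = [(1, 2), (3, 3), (5, 4)]
results = [lt(a, b) for a, b in rows]
print(results)  # [True, False, False]
```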
# 4.2 Lab 3: prelab (part 2)

You will design a fourth-order notch filter and investigate the effects of filter-coefficient quantization. You will compare the response of the filter having unquantized coefficients with that of a filter having coefficients quantized as a single fourth-order stage, and with that of a filter having coefficients quantized as a cascade of two second-order stages.

## Filter-coefficient quantization

One important issue that must be considered when IIR filters are implemented on a fixed-point processor is that the filter coefficients that are actually used are quantized from the "exact" (high-precision floating-point) values computed by MATLAB. Although quantization was not a concern when we worked with FIR filters, it can cause significant deviations from the expected response of an IIR filter.

By default, MATLAB uses 64-bit floating-point numbers in all of its computation. These floating-point numbers can typically represent 15-16 digits of precision, far more than the DSP can represent internally. For this reason, when creating filters in MATLAB, we can generally regard the precision as "infinite," because it is high enough for any reasonable task. Not all IIR filters are necessarily "reasonable"! The DSP, on the other hand, operates using 16-bit fixed-point numbers in the range of $-1.0$ to $1.0-2^{-15}$. This gives the DSP only 4-5 digits of precision, and only if the input is properly scaled to occupy the full range from -1 to 1.

For this exercise, you will examine how this difference in precision affects a notch filter generated using the butter command: [B,A] = butter(2,[0.07 0.10],'stop').

## Quantizing coefficients in MATLAB

It is not difficult to use MATLAB to quantize the filter coefficients to the 16-bit precision used on the DSP.
To do this, first take each vector of filter coefficients (that is, the $A$ and $B$ vectors) and divide it by the smallest power of two such that the resulting absolute value of the largest filter coefficient is less than or equal to one. This is an easy but fairly reasonable approximation of how numbers outside the range of -1 to 1 are actually handled on the DSP. Next, quantize the resulting vectors to 16 bits of precision by first multiplying them by $2^{15}=32768$, rounding to the nearest integer (use round), and then dividing the resulting vectors by 32768. Then multiply the resulting numbers, which will be in the range of -1 to 1, back by the power of two that you divided out.

## Effects of quantization

Explore the effects of quantization by quantizing the filter coefficients for the notch filter. Use the freqz command to compare the response of the unquantized filter with two quantized versions: first, quantize the entire fourth-order filter at once; second, quantize the second-order ("bi-quad") sections separately and recombine the resulting quantized sections using the conv function. Compare the response of the unquantized filter and the two quantized versions. Which one is "better"? Why do we always implement IIR filters using second-order sections instead of implementing fourth (or higher) order filters directly?

Be sure to create graphs showing the difference between the filter responses of the unquantized notch filter, the notch filter quantized as a single fourth-order section, and the notch filter quantized as two second-order sections. Save the MATLAB code you use to generate these graphs, and be prepared to reproduce and explain the graphs as part of your quiz. Make sure that in your comparisons, you rescale the resulting filters to ensure that the response is unity (one) at frequencies far outside the notch.
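The scale-round-rescale recipe above can be sketched outside MATLAB as well. The following is an illustrative Python version of the same steps (the function name `quantize_16bit` and the example coefficients are made up for the demonstration; they are not the actual notch-filter values):

```python
import math

def quantize_16bit(coeffs):
    """Quantize filter coefficients to 16-bit fixed point, following the steps above:
    1) divide by the smallest power of two that brings the largest |coefficient| <= 1,
    2) quantize to 16 bits (multiply by 2**15, round to nearest integer, divide back),
    3) multiply the power of two back in."""
    peak = max(abs(c) for c in coeffs)
    # smallest power of two such that peak / scale <= 1 (scale >= 1)
    scale = 2 ** max(0, math.ceil(math.log2(peak))) if peak > 0 else 1
    return [round(c / scale * 2**15) / 2**15 * scale for c in coeffs]

# Arbitrary example coefficients; here peak = 4.851, so scale = 8 and the
# quantization error per coefficient is at most scale / 2**16.
a = [1.0, -3.587, 4.851, -2.924, 0.668]
aq = quantize_16bit(a)
print(max(abs(x - y) for x, y in zip(a, aq)))
```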
## Tuesday, July 26, 2011

### typedef[ine]

The typedef keyword introduces a new type. The #define directive provides a textual substitution mechanism. When does this matter? Consider the following source:

```c
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include "str_type.h"

size_t length (CharPtr str)
{
    return strlen (str);
}

void message ()
{
    CharPtr str = malloc (255);
    if (! str) {
        return;
    }
    strcpy (str, "The Message");
    printf ("%s\n", str);
    free (str);
}

int main ()
{
    CharPtr str = "Some Static String";
    CharPtr str1, str2, str3;

    printf ("Length: %zu\n", length (str));
    message ();
    printf ("sizeof(str1): %zu\n", sizeof (str1));
    printf ("sizeof(str2): %zu\n", sizeof (str2));
    return 0;
}
```

If str_type.h contains

```c
typedef char * CharPtr;
```

then the output of running that program is

```
Length: 18
The Message
sizeof(str1): 8
sizeof(str2): 8
```

However, if that file contains

```c
#define CharPtr char *
```

the output becomes

```
Length: 18
The Message
sizeof(str1): 8
sizeof(str2): 1
```

Notice that since CharPtr is no longer a type, you get

```c
char * str1, str2, str3;
```

after the preprocessing phase of translation, and the asterisk binds only to the first variable, leaving the remaining two as plain char variables. Use #define directives only to provide textual substitution prior to translation; it does not interact with - or complement - the type system.

## Friday, July 15, 2011

### Double Standard

In almost all instances it is a bad idea to place multiple distinct scales on a single plot. Not only does it require more of the reader in terms of deciphering the message, it also lessens the impact of the chart itself. Instead of a reader being able to associate the vertical data with an immediate estimated value in their mind, they now have a two-step process to determine a value. First, they must choose which scale applies; then they must consult the appropriate scale to derive a value. This is all a bad thing. I also feel this is a common way to try to hide information in plain sight.
Consider the following graph: what points define where the two lines intersect? (Here's a hint - they don't.) This layout lends itself to all sorts of sleight of hand when talking to or describing data. Usually, I would claim that any graph that did this could be better provided as either a multiplot or two separate graphs.

However, the other day I saw an instance where it made perfect sense: temperature. Plotting temperature with two separate scales is actually a representation of two separate mathematical functions that represent the same property; in effect the graph acts as a lookup table for translating from one function to the other. As an example, consider the following: a glorified lookup table to be sure - but entirely functional.

Moving forward, I've modified my view to something more along the lines of: "Dual scales are useful when they both describe a single property in two distinct ways." Any measurement that can be represented in a variety of ways falls into this category: temperature (Fahrenheit v. Celsius), time (24-hour v. 12-hour), distance (miles v. kilometers), and so on.

## Tuesday, July 5, 2011

### Progress

[EDIT] This code is now available on github: libpbar

cURL ships with one. wget uses its own. OpenSSH (scp) even comes with its own version. Indeed, I've written several variations myself over the years. Sooner or later, if you are doing something that processes large amounts of data or takes a long time to complete and you are working in a command-line environment, you will write your own, too. A progress bar.

I did a quick search for existing progress bar libraries but came up with little outside of Ruby and Python wrappers around what the packages I mention above are already doing [1]. For mostly selfish reasons, I've decided to write a library that fixes my redundant behavior moving forward. Once I am able to provide a proper build system I will put this code up on github - my unfamiliarity with autotools may delay this a bit.
What this library aims to provide is a consistent interface for programmatically displaying a progress bar outside of a GUI environment. It does not try to provide a package similar to Pipe Viewer (pv), which is a complete, standalone program for monitoring process activity. Instead, to keep things simple, libpbar will initially provide a class with the following public interface:

```cpp
class ProgressBar
{
  public:
    ProgressBar (::size_t cap);
    void update (::size_t n = 1);
};
```

Three implementations are provided with the library: TextProgressBar, ColorProgressBar, and GraphicalProgressBar. These provide basic text functionality, color text, and a GTK+-based object, respectively. Using any of these facilities is as simple as:

```cpp
int flags = TextProgressBar::SHOW_VALUE | TextProgressBar::SHOW_COUNT;
TextProgressBar pb(100, flags);
```

They each do as you might expect: TextProgressBar, ColorProgressBar, GraphicalProgressBar.

The nice thing about the GUI version is that the hosting program doesn't need to be a GUI application. For this version of the progress bar a separate thread is used to spawn the graphical interface, and as much as possible is done to shed the window treatments to expose just the progress bar itself.

I've got to expand on this by allowing some way to encode message text with format strings and allowing user-defined color modifications. The basics are there, however, and I should now be free from writing yet another progress bar implementation.

[1] This is a solved problem, and certainly my fruitless search does not imply any lack of existing tools.
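To make the ProgressBar(cap) / update(n) interface concrete, here is a minimal Python sketch of the text variant. This is an illustrative analogue only, not part of libpbar, and the class name is reused purely for the parallel:

```python
import sys

class TextProgressBar:
    """Minimal text progress bar mirroring the ProgressBar(cap)/update(n) interface."""
    def __init__(self, cap, width=40):
        self.cap = cap      # total number of work units
        self.count = 0      # units completed so far
        self.width = width  # bar width in characters

    def update(self, n=1):
        # Advance by n units, clamping at the cap, and redraw in place with '\r'.
        self.count = min(self.count + n, self.cap)
        filled = self.width * self.count // self.cap
        bar = "#" * filled + "-" * (self.width - filled)
        sys.stdout.write("\r[%s] %3d%%" % (bar, 100 * self.count // self.cap))
        if self.count == self.cap:
            sys.stdout.write("\n")

pb = TextProgressBar(100)
for _ in range(100):
    pb.update()
```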
Simplifying a trig expression

• May 29th 2008, 06:07 PM
Kim Nu
[(1 / (cos^2 (x)) - x]*sin(x)

Can someone help me simplify this?

Thanks,
Kim

• May 29th 2008, 06:49 PM
TKHunny
Probably not. Why do you think it possible?

• May 29th 2008, 08:06 PM
angel.white
Quote: Originally Posted by Kim Nu
[(1 / (cos^2 (x)) - x]*sin(x)
You've done a good job trying to line the brackets up, but they are unfortunately not correct. The "(" to the left of the "1" is never closed.

• May 29th 2008, 09:07 PM
Mathstud28
Quote: Originally Posted by Kim Nu
[(1 / (cos^2 (x)) - x]*sin(x)
I cannot see any simplification other than $\bigg[\frac{1}{\cos^2(x)}-x\bigg]\cdot\sin(x)=\frac{\sin(x)}{\cos^2(x)}-x\cdot\sin(x)=\tan(x)\cdot\sec(x)-x\cdot\sin(x)$
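The rewrite in the last post can be sanity-checked numerically. A quick Python check (not part of the thread) that (sec²x − x)·sin x equals tan x·sec x − x·sin x at a few sample points:

```python
import math

def lhs(x):
    # [1/cos^2(x) - x] * sin(x)
    return (1 / math.cos(x) ** 2 - x) * math.sin(x)

def rhs(x):
    # tan(x) * sec(x) - x * sin(x)
    return math.tan(x) * (1 / math.cos(x)) - x * math.sin(x)

for x in (0.3, 1.0, -0.7):
    assert math.isclose(lhs(x), rhs(x), rel_tol=1e-12)
print("identity holds at the sampled points")
```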
# python class (6): python 2 and python 3 class definition with super()

## 1. Why need super()

In a single-inheritance case (when you subclass one class only), your new class inherits the methods of the base class. This includes __init__. So if you don't define it in your class, you will get the one from the base.

Things get complicated when you introduce multiple inheritance (subclassing more than one class at a time). This is because if more than one base class has __init__, your class will inherit the first one only. In such cases you should really use super if you can; I'll explain why. But you can't always: the catch is that all your base classes must also use it (and their base classes as well, the whole tree). If that is the case, then this will work correctly (in Python 3, but you could rework it into Python 2, which also has super):

## python3

```python
class A:
    def __init__(self):
        print('A')
        super().__init__()

class B:
    def __init__(self):
        print('B')
        super().__init__()

class C(A, B):
    pass

C()
# prints:
# A
# B
```

## python2

```python
class A(object):
    def __init__(self):
        print('A')
        super(A, self).__init__()

class B(object):
    def __init__(self):
        print('B')
        super(B, self).__init__()

class C(A, B):
    pass

C()
# prints:
# A
# B
```

Notice how both base classes use super even though they don't have base classes of their own. What super does is call the method from the next class in the MRO (method resolution order). The MRO for C is (C, A, B, object); you can print C.__mro__ to see it. So C inherits __init__ from A, and super in A.__init__ calls B.__init__ (B follows A in the MRO). By doing nothing in C, you end up calling both, which is what you want. Now, if you were not using super, you would still inherit A.__init__ (as before), but this time nothing would call B.__init__ for you.
```python
class A:
    def __init__(self):
        print('A')

class B:
    def __init__(self):
        print('B')

class C(A, B):
    pass

C()
# prints:
# A
```

To fix that you have to define C.__init__:

```python
class C(A, B):
    def __init__(self):
        A.__init__(self)
        B.__init__(self)
```

The problem with that is that in more complicated MI trees, the __init__ methods of some classes may end up being called more than once, whereas super/MRO guarantees that each is called just once.

## 2. How to use super()

super() (without arguments) was introduced in Python 3:

```python
super()  # same as super(__class__, self)
```

The Python 2 equivalent for new-style classes:

```python
super(CurrentClass, self)
```

For old-style classes you can always use:

```python
class Classname(OldStyleParent):
    def __init__(self, *args, **kwargs):
        OldStyleParent.__init__(self, *args, **kwargs)
```
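The MRO claim above is easy to verify for yourself. A small Python 3 check that records which __init__ methods run, and in what order (the trace attribute is just for the demonstration):

```python
class A:
    def __init__(self):
        self.trace = getattr(self, 'trace', []) + ['A']
        super().__init__()

class B:
    def __init__(self):
        self.trace = getattr(self, 'trace', []) + ['B']
        super().__init__()

class C(A, B):
    pass

# The MRO determines the order in which super() walks the classes:
print([cls.__name__ for cls in C.__mro__])  # ['C', 'A', 'B', 'object']

c = C()
print(c.trace)  # ['A', 'B'] (each __init__ ran exactly once)
```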
Permutations with and without repetition

A permutation is an arrangement of objects in a definite order: each of the different arrangements which can be made by taking some or all of a number of things is called a permutation. In other words, a permutation is an ordered combination; the order in which the elements are arranged matters. The word is also used for the number of such arrangements that are possible.

There are basically two types of permutation:

Repetition is allowed: such as a number lock whose code could be "333", or a lock that opens with 1221. The same element may be picked again; the phrase "with replacement" usually means repetition is allowed, and "without replacement" that it isn't. (Technically, there's no such thing as a permutation with repetition, but the term is in common use.)

No repetition: for example, the first three people in a running race. You can't be first and second.

Permutations with repetition

When order matters and repetition is allowed, if n is the number of things to choose from (balloons, digits, letters) and we choose r of them, the number of permutations is n^r: the first position can be filled in n ways, and so can every position after it. Some examples:

- Three-letter words over the 26-letter alphabet: 26^3 = 17576.
- Four-letter words: 26^4 possibilities, since any of the 26 letters can go in each of the four places.
- A five-digit code with digits 0-9: 10^5.
- Three-digit numbers formed from the digits 1 to 5, repetition allowed: 5^3 = 125.

All permutations of the string ABC with repetition of characters, in lexicographic (alphabetical) order, are:

AAA AAB AAC ABA ABB ABC ACA ACB ACC BAA BAB BAC BBA BBB BBC BCA BCB BCC CAA CAB CAC CBA CBB CBC CCA CCB CCC

Ordered arrangements of n elements of a set S, where repetition is allowed, are called n-tuples; a permutation with repetition of n chosen elements is also known as an "n-tuple". Such arrangements have sometimes been referred to as permutations with repetition, although they are not permutations in the strict sense.

Permutations without repetition

In a permutation without repetition, you select r objects at a time from n distinct objects, and nothing may be reused: after choosing, say, number "14", we can't choose it again. The first position has n choices, the second (n - 1), and so on, reducing the available choices by one each time. The number of permutations of n different things taken r at a time is denoted nPr and equals n!/(n - r)!.

Related counts

When some of the objects are identical, the repetitions are taken care of by dividing the permutation count by the factorial of the number of identical objects (compare the arrangements of A, B, C with those of A, A, B). For combinations with repetition allowed, the number of multisets of size r selected from a set of n elements is C(r + n - 1, r); this equals the number of ways r objects can be selected from n categories of objects with repetition allowed.

A common programming exercise asks, for a given input string, to print all permutations with repetition of its characters in lexicographically sorted order; for ABC that is exactly the 27 strings listed above.
where, n is number of things to choose from; r is number of things we choose of n; repetition is allowed; order matters; Permutation without Repetition Permutation without Repetition: for example the first three people in a running race. ), the number of permutations will equal P = n r. Permutations Where Repetition Isn't Allowed When additional restrictions are imposed, the situation is transformed into a problem about permutations with restrictions. Permutations without Repetition In this case, we have to reduce the number of available choices each time. All the different arrangements of the letters A, B, C. All the different arrangements of the letters A, A, B "With repetition" means that repetition is allowed. There are basically two types of permutation: Repetition is Allowed: It could be “333”. Ways to sum to N using array elements with repetition allowed; Python program to get all subsets of given size of a set; Count Derangements (Permutation such that no element appears in its original position) Iterative approach to print all permutations of an Array; Distinct permutations … A permutation is an arrangement in a definite order of a number of objects taken some or all at a time. Numbers can be created, if repetition is allowed will be equal to, '' repetition. N'T choose it again transformed into a problem about permutations with repetition by treating the elements as an n-tuple... 3 litter words can be made by taking some or all at a time from n distinct objects repetition! 3 litter words can be made by taking some or all of a given array in C.... Are double objects or repetitions in a permutation with repetition allowed which elements are arranged is very important more! An n-tuple '' chosen elements is also known as an n-tuple '' all of set! Position, e.g repetition and how many 3 litter words can be formed using the digits from 1 5. $26^ { 4 }$ possibilities repetition by treating the elements as an ordered,. 
Alphabetical order ( lexicographically sorted order ) ordered set, and writing a function from a index! Language arts the printing of permutation: repetition is allowed: we select! K=Combination length ) in excel in the below example of characters is allowed to as permutations with and! The factorial of the words are allowed this with just the product rule: (. A number of such arrangements that are possible to pick the same the example. For instance, be 333 example: the code that has the same number for more than one position should! In print all the possible permutations find all lexicographic permutations of a given array C! N-1 ) choices arrangements of n distinct objects with repetition '' means repetition. Actually answer this with just the product rule: \ ( n\ ) items with repetition and many! Imposed, the first three people in a permutation is an arrangement of objects when repetition n. It could be “ 333 ” permutations are items arranged in a given order meaning …! You have r positions to arrange your first three people in a race! Array in C Program than one position, e.g can select 4 multiple if. The given input string, print all the possible permutations first three people in a array! Transformed into a problem about permutations with repetition of the same number for more than one position,.... Ordered arrangements of n elements of a given order meaning [ … permutation with repetition allowed print k different permutations. In C Program items arranged in a given array in C Program is an arrangement in a given array C. With duplicates set of objects or you can have a PIN code that has the number... R-Permutation of a given order meaning [ … ] List permutations with restrictions ). Here we are selecting items ( digits ) where repetition of the same number for more than one position e.g! Is an arrangement, or listing, of objects is allowed then how many different three numbers. 
# Permutations with and without repetition

A permutation is an arrangement, or listing, of objects in which the order is important. There are two basic types: permutations with repetition and permutations without repetition.

When repetition is allowed, the same element may appear in more than one position. Each of the k positions can then be filled by any of the n available elements, so the number of arrangements is n^k (n = number of elements, k = length of the arrangement). Such ordered selections with repetition are also known as n-tuples. For example, the combination of a lock that allows repeated digits could be "333"; with six choices per position and three positions there are 6 × 6 × 6 = 216 possibilities, and with four positions each holding one of 26 letters there are $26^{4}$ possibilities.

When repetition is not allowed, an element cannot be chosen again once it has been used: the first position has n choices, the second n − 1, and so on, reducing by 1 for each subsequent term. This is how one counts, say, the orders in which the first three finishers of a running race can arrive, or the arrangements of your first three classes if they are math, science, and language arts. When the objects themselves contain duplicates, the repetitions are taken care of by dividing the permutation count by the factorial of the number of occurrences of each repeated element; this is the rule behind printing all distinct permutations of a string with duplicates.

Two common programming exercises on this topic are generating all permutations of a string in lexicographic (sorted) order, and writing a function that maps a zero-based index to the nth permutation.
# ABCD is a trapezium having AB || DC. Prove that O, Question: $A B C D$ is a trapezium having $A B \| D C$. Prove that $O$, the point of intersection of diagonals, divides the two diagonals in the same ratio. Also prove that $\frac{\operatorname{ar}(\Delta O C D)}{\operatorname{ar}(\Delta O A B)}=\frac{1}{9}$, if $A B=3 C D$. Solution: We are given that ABCD is a trapezium with AB||DC. $\angle A O B=\angle C O D$ (vertically opposite angles) $\angle A B O=\angle O D C$ (alternate angles) $\angle B A O=\angle D C O$ (alternate angles) Therefore, $\triangle O D C \sim \triangle O B A$ $\Rightarrow \frac{A O}{O C}=\frac{B O}{D O}=\frac{A B}{C D}$ $\Rightarrow \frac{A O}{O C}=\frac{B O}{D O}$ Hence we have proved that O, the point of intersection of the diagonals, divides the two diagonals in the same ratio. We are given $A B=3 C D$ and we have to prove that $\frac{\operatorname{ar}(\Delta O C D)}{\operatorname{ar}(\Delta O A B)}=\frac{1}{9}$ We have already proved that AOB and COD are similar triangles, and the ratio of areas of similar triangles equals the square of the ratio of corresponding sides. So $\frac{\operatorname{ar}(\Delta O C D)}{\operatorname{ar}(\Delta O A B)}=\frac{C D^{2}}{A B^{2}}$ $\frac{\operatorname{ar}(\Delta O C D)}{\operatorname{ar}(\Delta O A B)}=\frac{C D^{2}}{(3 C D)^{2}}$ $\frac{\operatorname{ar}(\Delta O C D)}{\operatorname{ar}(\Delta O A B)}=\frac{1}{9}$ Hence we have proved that $\frac{\operatorname{ar}(\Delta O C D)}{\operatorname{ar}(\Delta O A B)}=\frac{1}{9}$
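Both claims can be sanity-checked numerically. The sketch below places a hypothetical trapezium with $CD = 1$ and $AB = 3CD$ on coordinates of my own choosing (not part of the original solution), finds O as the intersection of the diagonals, and confirms the equal-ratio and 1/9 area results:

```python
from math import dist

# Trapezium with DC on the x-axis and AB parallel to it, AB = 3 * CD.
D, C = (0.0, 0.0), (1.0, 0.0)
A, B = (-1.0, 3.0), (2.0, 3.0)          # AB has length 3 = 3 * CD

# Intersection O of diagonals AC and BD. Parametrising BD as (2t, 3t) and
# AC as A + s*(C - A) and solving gives s = 3/4, t = 1/4, so O = (0.5, 0.75).
O = (0.5, 0.75)

def collinear(P, Q, X):
    # X lies on line P->Q when the cross product of (Q-P) and (X-P) vanishes
    return abs((Q[0]-P[0])*(X[1]-P[1]) - (Q[1]-P[1])*(X[0]-P[0])) < 1e-12

assert collinear(A, C, O) and collinear(B, D, O)   # O is on both diagonals

# O divides both diagonals in the same ratio (here 3 : 1, matching AB/CD = 3).
assert abs(dist(A, O)/dist(O, C) - dist(B, O)/dist(O, D)) < 1e-12

def area(P, Q, R):
    # Shoelace formula for the area of a triangle
    return abs((Q[0]-P[0])*(R[1]-P[1]) - (R[0]-P[0])*(Q[1]-P[1])) / 2

assert abs(area(O, C, D) / area(O, A, B) - 1/9) < 1e-12
```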
# Horizontal Bar Chart: A stroll through the languages of data¶

#### Some preliminaries¶

If you are wondering why in the world this webpage looks the way it does, it might help you to review Anaconda, Jupyter scripts and a basic Python example. You can do so by reviewing the post(s) below.

### Using Matplotlib¶

Trying to figure out which predictive modeling programming language is the "best" is a bit like the Greatest Of All Time (GOAT) debate over who is better: Federer or Nadal?* Interesting but, ultimately, useless. People use or prefer different languages for a myriad of reasons. We won't get into any of that, but will use some data around the popularity of various languages in order to showcase a few bar-charting capabilities in Python! The data we use is (very loosely) based on the following posts: http://r4stats.com/articles/popularity/ | https://www.kdnuggets.com/2018/05/poll-tools-analytics-data-science-machine-learning-results.html

We start our foray into horizontal bar charts using the popular Matplotlib library (https://matplotlib.org/). Matplotlib was inspired by MATLAB, and provides a lot of control over nearly every aspect of the chart (at the cost of lots of coding!).

*silly question: Nadal of course!
:)

In [21]:

```python
import numpy as np
import seaborn as sns
sns.set(style="whitegrid")       # makes the graph look a little nicer
import matplotlib.pyplot as plt  # the library that contains the plotting capabilities
from operator import itemgetter  # used in the sorting procedure below

D = [('SQL',10),('Python',11),('R',8),('SAS',5.5),('Julia',0.3),('Excel',5)]  # data for language & popularity
Dsort = sorted(D, key=itemgetter(1), reverse=False)  # sort the list in order of popularity
lang = [x[0] for x in Dsort]  # a list from the first dimension of the data
use = [x[1] for x in Dsort]   # a list from the second dimension of the data

plt.barh(lang, use, align='center', alpha=0.7, color='r', label='2018')  # horizontal bar chart (use .bar instead of .barh for vertical)
plt.yticks(lang)
plt.xlabel('Usage')
plt.title('Guesstimating Programming Language Usage')
plt.legend()  # puts the year, e.g. 2018, on the plot
plt.show()
```

What if we want to compare two series in a bar chart, perhaps a comparison of 2017 popularity to 2018?
In [22]:

```python
D = [('SQL',10,12),('Python',11,9),('R',8,7),('SAS',5.5,4.5),('Julia',0.3,0.1),('Excel',5,3)]  # language & usage; the 3rd column is 2017 usage
Dsort = sorted(D, key=itemgetter(1), reverse=False)  # sort the list in order of usage
lang = [x[0] for x in Dsort]  # a list from the first dimension of the data
use = [x[1] for x in Dsort]   # a list from the second dimension (2018 popularity)
use2 = [x[2] for x in Dsort]  # a list from the third dimension (2017 popularity)

ind = np.arange(len(lang))
width = 0.3
ax = plt.subplot(111)
ax.barh(ind, use, width, align='center', alpha=0.8, color='r', label='2018')  # horizontal bar chart (use .bar instead of .barh for vertical)
ax.barh(ind - width, use2, width, align='center', alpha=0.8, color='b', label='2017')
ax.set(yticks=ind - width/2, yticklabels=lang, ylim=[2*width - 1, len(lang)])
plt.xlabel('Usage')
plt.title('Guesstimating Programming Language Usage')
plt.legend()
plt.show()
```

#### Data Labels¶

What if you want the numbers to show up next to the bars?
In [25]:

```python
ax = plt.subplot(111)
ax.barh(ind, use, width, align='center', alpha=0.7, color='r', label='2018')
ax.barh(ind - width, use2, width, align='center', alpha=0.7, color='b', label='2017')
ax.set(yticks=ind - width/2, yticklabels=lang, ylim=[2*width - 1, len(lang)])
for i, v in enumerate(use):
    ax.text(v+0.15, i-0.05, str(v), color='red', fontsize=9)   # the 0.15 and 0.05 were set after trial & error (based on how nice things look)
for i, v in enumerate(use2):
    ax.text(v+0.15, i-0.4, str(v), color='blue', fontsize=9)   # the 0.4 was set after trial & error (based on how nicely it aligns, edit it and rerun to see the difference)
plt.xlabel('Usage')
plt.title('Guesstimating Programming Language Usage')
plt.legend()
plt.show()
```

Now that was a heck of a lot of code to write to put together a bar chart! You can do it much quicker in Excel. However, these techniques will be helpful if we want to limit switching between data exploration in Python and Excel. I will update this post in the future with techniques that shorten the code and make the charts look prettier (e.g. using seaborn library). Be back soon!

In [ ]:
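As a small taste of the promised code-shortening, the repetitive "draw a series, then annotate each bar" pattern above can be wrapped in a helper function. This is my own sketch (the function name `labeled_barh` and its defaults are invented for illustration, not from the original post):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen so this sketch runs headless
import matplotlib.pyplot as plt

def labeled_barh(ax, y_pos, values, width, color, label, dy=0.0):
    """Draw one horizontal bar series and annotate each bar with its value."""
    bars = ax.barh([y + dy for y in y_pos], values, width,
                   align='center', alpha=0.7, color=color, label=label)
    for y, v in zip(y_pos, values):
        ax.text(v + 0.15, y + dy - 0.05, str(v), color=color, fontsize=9)
    return bars

lang = ['Julia', 'SAS', 'R', 'SQL', 'Python']
use_2018 = [0.3, 5.5, 8, 10, 11]
use_2017 = [0.1, 4.5, 7, 12, 9]

ax = plt.subplot(111)
ind = range(len(lang))
labeled_barh(ax, ind, use_2018, 0.3, 'r', '2018')
labeled_barh(ax, ind, use_2017, 0.3, 'b', '2017', dy=-0.3)
ax.set(yticks=[i - 0.15 for i in ind], yticklabels=lang)
ax.legend()
```

Each extra series is now one function call instead of a bar call plus an annotation loop.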
# Using PyOpenGL, can I rotate raw data rather than the current matrix? I'm trying to create a sort of PyOpenGL renderer for the Bullet physics engine, and at the moment I'm rotating my basic cubes with math in Python. However, if I do the math in Python that's going to take more time than doing the math with OpenGL. OpenGL has the ability to rotate a matrix using glRotatef(), but it only rotates the "current matrix". I'm not really interested in convoluting my render loop with an individual check for each object's rotation and position if I don't have to. So my question is, is there a way I can, for example, load a matrix of points into OpenGL, rotate this matrix and then read it back and give it to my render loop? Or should I instead look for an alternative 3D math library? (Or for that matter, is running the code in python really all that laggy?) • It seems like you are doing things the old, fixed-function way with calls like glRotatef(). What you probably are looking for are Vertex Buffer Objects (VBOs). These would allow you to pass in an array of transformation (model-view) matrices. You could then use a shader program to take that attribute data and calculate the transforms on the GPU instead. – CodeSurgeon Jul 29 '18 at 3:26 • Actually, I'm not even using glRotatef(), I'm using math in Python that I found on the internet to rotate my cubes. Currently, I'm moving towards VBOs in order to speed up rendering my game terrain, is there a way to rotate parts of a VBO? – C1ff Jul 30 '18 at 16:20 • There are several approaches to rotating a part of a VBO. If you have the chance, you might want to take a look at this video for some ideas (it uses javascript and webgl, but it could give you some ideas to start with). I will try to write up a more thorough answer later today in the meantime. Also, what math library are you using? If you are doing this in python, I can vouch that it will be slow. 
I have written some math modules in cython that I wouldn't mind cleaning up and sharing, since numpy is not very convenient to use for 3d math. – CodeSurgeon Jul 31 '18 at 19:35 • I've actually been using just the math library and making matrices out of lists. – C1ff Jul 31 '18 at 20:13 • Wrote up an answer as a stream of consciousness. Please let me know if there are any parts that need to be clarified or are worded in a confusing manner! – CodeSurgeon Jul 31 '18 at 23:32 First of all, it is good that you appear to be on the right track with using VBOs! In general, when working with the GPU, it is a good idea to pass it as much data at once as possible. This means minimizing the number of draw calls (i.e. send a whole bunch of cubes in a single draw call rather than drawing one cube at a time with glDrawArrays or, worse, glVertex). • There is first the issue of Python's memory model. In Python, everything, even primitives such as integers, is a bulky object built from a basic PyObject struct rather than a direct, native machine type. This can make accessing the data of the underlying objects slow, with even apparently simple operations such as adding two numbers invoking lots of operations behind the scenes in the Python interpreter. For fast math operations, this is not conducive to good performance, and there is generally little to gain from the safety features (such as avoiding overflow) that are offered by this additional layer.
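To make the "move the math out of pure Python" advice concrete, here is a hedged sketch of the usual middle ground before shader-side transforms: rotating an entire vertex array at once with a NumPy matrix multiply, instead of a per-vertex Python loop (the array contents are my own example data, not from the question):

```python
import numpy as np

def rotation_z(theta):
    """3x3 rotation matrix about the z axis (right-handed, angle in radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# N x 3 array of vertices (example data standing in for a cube's vertex list).
verts = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [1.0, 1.0, 1.0]])

R = rotation_z(np.pi / 2)   # 90 degrees
rotated = verts @ R.T       # rotate every vertex in one vectorized operation

# (1, 0, 0) rotated 90 degrees about z becomes (0, 1, 0)
assert np.allclose(rotated[0], [0.0, 1.0, 0.0])
```

The rotated array can then be uploaded to a VBO in one call; the same idea scales to a 4x4 model-view matrix with translation.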
# Identify isomorphism type for each proper subgroup of (Z/32Z)*

#### ianchenmu ##### Member
The question is to identify the isomorphism type for each proper subgroup of $(\mathbb{Z}/32\mathbb{Z})^{\times }$. (What does "isomorphism type" mean? Does the question mean we need to list all the isomorphisms between each subgroup and the respective group that it is isomorphic to? If so, what are they?)

#### jakncoke ##### Active member
All I found regarding type was: if G is a finite group that is a direct product of cyclic groups of orders ${p_1}^{r_1},...,{p_k}^{r_k}$ with $p_i \leq p_k$ for $i<k$ (a direct product of prime-power cyclic groups), the type was defined to be the k-tuple $({p_1}^{r_1}, ...,{p_k}^{r_k})$. I guess for $\mathbb{Z}/32\mathbb{Z}$ the proper subgroups are of order 1, 2, 4, 8, 16, since it is cyclic. Subgroups are isomorphic to $Z_{2},Z_{4},Z_{8},Z_{16}$, so I guess they are of type (2), (4), (8), (16)?

#### Klaas van Aarsen ##### MHB Seeker Staff member
> The question is to identify the isomorphism type for each proper subgroup of $(\mathbb{Z}/32\mathbb{Z})^{\times }$. (What does "isomorphism type" mean? Does the question mean we need to list all the isomorphisms between each subgroup and the respective group that it is isomorphic to? If so, what are they?)

I believe that for each proper subgroup you need to identify a group that it is isomorphic to. For instance, the subgroup with 2 elements, {1, 17}, is isomorphic to $C_2$.

#### Klaas van Aarsen ##### MHB Seeker Staff member
Note that there is more than one proper subgroup with 4 elements.

#### ianchenmu ##### Member
> All I found regarding type was: if G is a finite group that is a direct product of cyclic groups of orders ${p_1}^{r_1},...,{p_k}^{r_k}$ with $p_i \leq p_k$ for $i<k$ (a direct product of prime-power cyclic groups), the type was defined to be the k-tuple $({p_1}^{r_1}, ...,{p_k}^{r_k})$. I guess for $\mathbb{Z}/32\mathbb{Z}$ the proper subgroups are of order 1, 2, 4, 8, 16,
> since it is cyclic. Subgroups are isomorphic to $Z_{2},Z_{4},Z_{8},Z_{16}$, so I guess they are of type (2), (4), (8), (16)?

It's $(\mathbb{Z}/32\mathbb{Z})^{\times }$, not $\mathbb{Z}/32\mathbb{Z}$.

#### jakncoke ##### Active member
> It's $(\mathbb{Z}/32\mathbb{Z})^{\times }$, not $\mathbb{Z}/32\mathbb{Z}$.

Can you tell me what the cross represents?

#### Klaas van Aarsen ##### MHB Seeker Staff member
> Can you tell me what the cross represents?

The cross represents "times". It's the set of whole numbers mod 32 with $\times$ as the operation. In particular, every element that does not have an inverse is removed from the set. In other words, $(\mathbb Z/32\mathbb Z)^\times = (\{1, 3, 5, ..., 31\}, \times)$. It is also denoted as $(\mathbb Z/32\mathbb Z)^*$. You may be more familiar with, for instance, $\mathbb R^*$.

#### ianchenmu ##### Member
> Can you tell me what the cross represents?

The question is to draw the complete lattice of subgroups of $(\mathbb{Z}/32\mathbb{Z})^{\times }$ and, for each proper subgroup, identify the isomorphism type. (According to the definition, $(\mathbb{Z}/n\mathbb{Z})^{\times }=\left \{ \bar{a}\in \mathbb{Z}/n\mathbb{Z} \mid (a,n)=1\right \}$, so $(\mathbb{Z}/32\mathbb{Z})^{\times }=\left \{ \overline{1},\overline{3},\overline{5},\overline{7},\overline{9},...,\overline{31}\right \}$. But what then? Can these elements form any subgroup? How do we draw the lattice and, for each proper subgroup, identify the isomorphism type?)

#### Klaas van Aarsen ##### MHB Seeker Staff member
> so $(\mathbb{Z}/32\mathbb{Z})^{\times }=\left \{ \overline{1},\overline{3},\overline{5},\overline{7},\overline{9},...,\overline{31}\right \}$. But what then? Can these elements form any subgroup? How do we draw the lattice and, for each proper subgroup, identify the isomorphism type?

Which subgroup is generated by $\langle \overline{3} \rangle$? And what is $\langle \overline{9} \rangle$? And...? To identify each subgroup, start with 1 element and see what it generates.
If there is still space, try to add a 2nd element and see what it generates.

#### ianchenmu ##### Member
> Which subgroup is generated by $\langle \overline{3} \rangle$? And what is $\langle \overline{9} \rangle$? And...? To identify each subgroup, start with 1 element and see what it generates. If there is still space, try to add a 2nd element and see what it generates.

But $\langle \overline{3} \rangle=\{\overline{3},\overline{9},\overline{27},\overline{17},...\}$, and $\langle \overline{3} \rangle$ includes every element of $(\mathbb{Z}/32\mathbb{Z})^\times$. How can we draw a lattice of subgroups of $(\mathbb{Z}/32\mathbb{Z})^\times$?

#### Klaas van Aarsen ##### MHB Seeker Staff member
> But $\langle \overline{3} \rangle=\{\overline{3},\overline{9},\overline{27},\overline{17},...\}$, and $\langle \overline{3} \rangle$ includes every element of $(\mathbb{Z}/32\mathbb{Z})^\times$. How can we draw a lattice of subgroups of $(\mathbb{Z}/32\mathbb{Z})^\times$?

Yes. $\langle \overline{3} \rangle$ generates the entire group. But then $\overline{3}\cdot \overline{3} = \overline{9}$ won't... Edit: sorry, that's not true. See below.

Last edited:

#### ianchenmu ##### Member
> But $\langle \overline{3} \rangle=\{\overline{3},\overline{9},\overline{27},\overline{17},...\}$, and $\langle \overline{3} \rangle$ includes every element of $(\mathbb{Z}/32\mathbb{Z})^\times$. How can we draw a lattice of subgroups of $(\mathbb{Z}/32\mathbb{Z})^\times$?

Oh, I got what you mean: $\langle \overline{9} \rangle$ has fewer elements, and so on... so there exist subgroup relationships. Is that right?

#### Klaas van Aarsen ##### MHB Seeker Staff member
> Oh, I got what you mean: $\langle \overline{9} \rangle$ has fewer elements, and so on... so there exist subgroup relationships. Is that right?

Correct! So $\overline{9}$ is the first element that generates a proper subgroup. How big is it? To which group is it isomorphic?

#### ianchenmu ##### Member
> Correct!
> So $\overline{9}$ is the first element that generates a proper subgroup. How big is it? To which group is it isomorphic?

So do I need to compute $\langle\overline{a}\rangle$ for each element $\overline{a}$ in $(\mathbb{Z}/32\mathbb{Z})^\times$ in order to get the lattice? ...It's so much work. And I can't find the isomorphism type; how do I find it?

#### ianchenmu ##### Member
> Yes. $\langle \overline{3} \rangle$ generates the entire group. But then $\overline{3}\cdot \overline{3} = \overline{9}$ won't...

Wait! I computed that $\overline{3}$ has order 8 in $(\mathbb{Z}/32\mathbb{Z})^\times$!

#### Klaas van Aarsen ##### MHB Seeker Staff member
> So do I need to compute $\langle\overline{a}\rangle$ for each element $\overline{a}$ in $(\mathbb{Z}/32\mathbb{Z})^\times$ in order to get the lattice? ...It's so much work. And I can't find the isomorphism type; how do I find it?

Did you know that a cyclic group is a group that can be generated by 1 element? Btw, a lot of the elements will be equivalent to another one. For instance, $9^{-1}$ will generate the same group as $9$.

> Wait! I computed that $\overline{3}$ has order 8 in $(\mathbb{Z}/32\mathbb{Z})^\times$!

Right! (I overlooked that myself.)

#### ianchenmu ##### Member
> Did you know that a cyclic group is a group that can be generated by 1 element? Btw, a lot of the elements will be equivalent to another one. For instance, $9^{-1}$ will generate the same group as $9$.

So after computing, I found $\langle\bar{3}\rangle = \langle\bar{11}\rangle = \langle\bar{19}\rangle = \langle\bar{27}\rangle$. So can I say this is the isomorphism type? What are they isomorphic to?

#### Klaas van Aarsen ##### MHB Seeker Staff member
> So after computing, I found $\langle\bar{3}\rangle = \langle\bar{11}\rangle = \langle\bar{19}\rangle = \langle\bar{27}\rangle$. So can I say this is the isomorphism type?

Good! All of these groups have 8 elements. Moreover, they are generated by 1 element. This means that their isomorphism type is $C_8$ (or $Z_8$ or $\mathbb Z/8\mathbb Z$, depending on which notation you prefer).

Last edited:

#### jakncoke ##### Active member
Ok.
Groups of order 2 are cyclic, so the only subgroups of order 2 correspond to elements of order 2: <15>, <17>, <31>.

Groups of order 4 can be either $Z_2 \times Z_2$ or $Z_4$. $Z_4$ corresponds to cyclic elements of order 4: <7>, <9>, <23>, <25>. Since <7> = <23> and <9> = <25>, <7> and <9> are our distinct subgroups isomorphic to $Z_4$. $Z_2 \times Z_2$ corresponds to the direct product of subgroups generated by elements of order 2: <15>$\times$<17>, <15>$\times$<31>, <31>$\times$<17>.

Now, groups of order 8 can be either $Z_8$, $Z_4 \times Z_2$, or $Z_2\times Z_2 \times Z_2$. It cannot be the quaternions or $D_8$, because those groups are not abelian while ours clearly is. So $Z_8$: <3>, <11>, <19>, <27>. $Z_4 \times Z_2$: <7>$\times$<15>, <9>$\times$<15>, <7>$\times$<17>, <9>$\times$<17>, <7>$\times$<31>, <9>$\times$<31>. Lastly, $Z_2 \times Z_2 \times Z_2$ is basically <15>$\times$<17>$\times$<31>.

#### ianchenmu ##### Member
I am wondering whether any two, or three (or possibly four...), elements from $\bar{1},\bar{3},\bar{5},\bar{7},...,\bar{31}$ will generate a subgroup of order m, m < 32. Is this possible? And what can the orders of the groups so formed be? As I searched, $\{\bar{3},\bar{31}\}$ generates $(\mathbb{Z}/32\mathbb{Z})^\times$ (see this page: Multiplicative group of integers modulo n - Wikipedia, the free encyclopedia).

Last edited:

#### jakncoke ##### Active member
> I am wondering whether any two, or three (or possibly four...), elements from $\bar{1},\bar{3},\bar{5},\bar{7},...,\bar{31}$ will generate a subgroup of order m, m < 32. Is this possible? And what can the orders of the groups so formed be? As I searched, $\{\bar{3},\bar{31}\}$ generates $(\mathbb{Z}/32\mathbb{Z})^\times$ (see this page: Multiplicative group of integers modulo n - Wikipedia, the free encyclopedia).

Since the order of every subgroup divides the order of the group, $2^{4}$, the only possible orders for subgroups are 1, 2, 4, 8, 16. I'm not sure if you are asking about generating sets of groups.
Generating set of a group - Wikipedia, the free encyclopedia

#### ianchenmu ##### Member
> Ok. Groups of order 2 are cyclic, so the only subgroups of order 2 correspond to elements of order 2: <15>, <17>, <31>. Groups of order 4 can be either $Z_2 \times Z_2$ or $Z_4$. $Z_4$ corresponds to cyclic elements of order 4: <7>, <9>, <23>, <25>. Since <7> = <23> and <9> = <25>, <7> and <9> are our distinct subgroups isomorphic to $Z_4$. $Z_2 \times Z_2$ corresponds to the direct product of subgroups generated by elements of order 2: <15>$\times$<17>, <15>$\times$<31>, <31>$\times$<17>. Now, groups of order 8 can be either $Z_8$, $Z_4 \times Z_2$, or $Z_2\times Z_2 \times Z_2$. It cannot be the quaternions or $D_8$, because those groups are not abelian while ours clearly is. So $Z_8$: <3>, <11>, <19>, <27>. $Z_4 \times Z_2$: <7>$\times$<15>, <9>$\times$<15>, <7>$\times$<17>, <9>$\times$<17>, <7>$\times$<31>, <9>$\times$<31>. Lastly, $Z_2 \times Z_2 \times Z_2$ is basically <15>$\times$<17>$\times$<31>.

What does $\times$ mean here? Is it that the subgroup is generated by the two elements before and after $\times$? And what about subgroups of order 16?

#### ianchenmu ##### Member
> Since the order of every subgroup divides the order of the group, $2^{4}$, the only possible orders for subgroups are 1, 2, 4, 8, 16. I'm not sure if you are asking about generating sets of groups. Generating set of a group - Wikipedia, the free encyclopedia

I mean, how could $\{\bar{3},\bar{31}\}$ generate $(\mathbb{Z}/32\mathbb{Z})^\times$? And, for example, what is $\langle\bar{15},\bar{17}\rangle$ and what is $\langle\bar{7},\bar{15}\rangle$?

#### Klaas van Aarsen ##### MHB Seeker Staff member
> What does $\times$ mean here? Is it that the subgroup is generated by the two elements before and after $\times$? And what about subgroups of order 16?

The symbol $\times$ between sets is the so-called Cartesian product. It forms a new set. Its elements are ordered pairs. For instance, $Z_3 \times Z_2$ is the set {(0,0), (0,1), (1,0), (1,1), (2,0), (2,1)}.
The order of $(\mathbb Z / 32 \mathbb Z)^\times$ is 16, so the subgroup of order 16 is not a proper subgroup, which your problem statement requires.

#### Klaas van Aarsen ##### MHB Seeker Staff member
> I mean, how could $\{\bar{3},\bar{31}\}$ generate $(\mathbb{Z}/32\mathbb{Z})^\times$? And, for example, what is $\langle\bar{15},\bar{17}\rangle$ and what is $\langle\bar{7},\bar{15}\rangle$?

I do not understand your question. The group $\langle\bar{15},\bar{17}\rangle$ is the group generated by 15 and 17. It contains $15$ and $17$, and it also contains each element obtained by (repeatedly) multiplying by $15$, $17$, $15^{-1}$, or $17^{-1}$.
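The element orders discussed in the thread are easy to verify with a short brute-force script (written for this writeup, not part of the thread):

```python
from math import gcd

n = 32
units = [a for a in range(1, n) if gcd(a, n) == 1]   # (Z/32Z)^x
assert len(units) == 16

def order(a, n):
    """Multiplicative order of a modulo n (assumes gcd(a, n) == 1)."""
    x, k = a % n, 1
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

# Orders found in the discussion:
assert order(3, n) == 8                              # <3> is C_8, not the whole group
assert order(9, n) == 4                              # <9> is C_4
assert all(order(a, n) == 2 for a in (15, 17, 31))   # the three elements of order 2
assert order(7, n) == 4 and (7 * 23) % n == 1        # 23 = 7^{-1}, so <7> = <23>

# The group is not cyclic: no element has order 16.
assert max(order(a, n) for a in units) == 8
```

This also confirms why jakncoke's first answer (which treated $\mathbb{Z}/32\mathbb{Z}$ as cyclic) did not apply: the maximal element order is 8, not 16.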
If it took Carlos $\frac{1}{2}$ hour to cycle from his house to the library yesterday, was the distance that he cycled greater than 6 miles? (Note: 1 mile = 5,280 feet)

(1) The average speed at which Carlos cycled from his house to the library yesterday was greater than 16 feet per second.
(2) The average speed at which Carlos cycled from his house to the library yesterday was less than 18 feet per second.

- Statement (1) ALONE is sufficient, but statement (2) alone is not sufficient.
- Statement (2) ALONE is sufficient, but statement (1) alone is not sufficient.
- BOTH statements TOGETHER are sufficient, but NEITHER statement ALONE is sufficient.
- EACH statement ALONE is sufficient.
- Statements (1) and (2) TOGETHER are NOT sufficient.
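The key arithmetic is the threshold speed: 6 miles in half an hour is 6 × 5,280 / 1,800 = 17.6 feet per second, which lies strictly between the two bounds. The official answer is not indicated in the excerpt; the check below is my own:

```python
# Threshold speed for exactly 6 miles in 1/2 hour, in feet per second.
miles, feet_per_mile = 6, 5280
seconds = 30 * 60                      # half an hour
threshold = miles * feet_per_mile / seconds
assert abs(threshold - 17.6) < 1e-9

# Statement (1): speed > 16 ft/s. Speeds on both sides of 17.6 qualify,
# so the distance could be below or above 6 miles. Not sufficient.
assert 16 < threshold
# Statement (2): speed < 18 ft/s. Same ambiguity. Not sufficient.
assert threshold < 18
# Together: 16 < speed < 18 still straddles 17.6, so even the combined
# statements cannot settle whether the distance exceeds 6 miles.
```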
2022-02-21

Dr. Genki Hosono gave us a stimulating talk on pluripotential and $L^2$ methods in complex geometry. The talk was carefully designed not only for non-mathematicians but also for experts on the topic. He began his talk with the definition and basic properties of subharmonic functions and their multivariable counterpart in complex geometry: plurisubharmonic functions. He then introduced the Bergman kernel and explained a variational approach to the Ohsawa-Takegoshi $L^2$ extension theorem, which is an extension theorem for holomorphic functions with a bound on the $L^2$ norm weighted by a plurisubharmonic function. Finally, he explained Deng-Wang-Zhang-Zhou's result on a 'reverse direction' of the Ohsawa-Takegoshi theorem, and his own result with Inayama on a related variant. His explanations were very clear and quite valuable for us.

Reported by Eiji Inoue
2014 03-16 lazy gege

Gege hasn't tidied his desk for a long time; now his desk is full of things. This morning Gege bought a notebook, but finding somewhere to put it troubles him. He wants to tidy a small area of the desk, leaving an empty area, and put the notebook there; the notebook shouldn't fall off the desk when put there. The desk is a square and the notebook is a rectangle; the area of the desk may be smaller than the notebook. Here are two possible conditions:

Can you tell Gege the smallest area he must tidy to put his notebook?

Input: T (T <= 100) in the first line is the case number. The next T lines each have 3 real numbers, L, A, B (0 < L, A, B <= 1000). L is the side length of the square desk; A, B are the length and width of the rectangular notebook.

Sample Input:
3
10.1 20 10
3.0 20 10
30.5 20.4 19.6

Sample Output:
25.0000
9.0000
96.0400

```c
/* 2013-04-22 */
#include "stdio.h"
#include "math.h"

int main()
{
    int t;
    double l, a, b, tem;
    scanf("%d", &t);
    while (t--) {
        scanf("%lf%lf%lf", &l, &a, &b);
        if (a > b) {                        /* make a the shorter side of the notebook */
            tem = a; a = b; b = tem;
        }
        if (sqrt(l*l + l*l) < a/2)          /* desk diagonal shorter than a/2: clear the whole desk */
            printf("%.4f\n", l*l);
        else if (a/2 < sqrt(l*l + l*l)/2)   /* half the shorter side fits within half the diagonal: a corner right triangle suffices */
            printf("%.4f\n", a*a/4);
        else                                /* whole desk minus the untouched far corner */
            printf("%.4f\n", l*l - (sqrt(l*l*2) - a/2)*(sqrt(l*l*2) - a/2));
    }
    return 0;
}
```

1. I have another question I'd like to ask: for beginners, recursion seems hard to understand. Are there any good methods or suggestions?
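The case analysis in the C solution can be cross-checked against the sample data with a direct Python port (a re-implementation for this writeup, not the judge's reference solution):

```python
from math import sqrt

def min_tidy_area(l, a, b):
    """Smallest area to clear on an l x l desk for an a x b notebook,
    following the three cases of the C solution above."""
    a, b = min(a, b), max(a, b)          # a is the shorter notebook side
    diag = sqrt(2) * l                   # desk diagonal
    if diag < a / 2:
        return l * l                     # even the whole desk barely suffices
    if a / 2 < diag / 2:
        return a * a / 4                 # corner right triangle with legs a/2
    return l * l - (diag - a / 2) ** 2   # whole desk minus the far corner

# The three sample cases and their expected outputs.
samples = [((10.1, 20, 10), 25.0000),
           ((3.0, 20, 10), 9.0000),
           ((30.5, 20.4, 19.6), 96.0400)]
for args, expected in samples:
    assert abs(min_tidy_area(*args) - expected) < 1e-4
```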
# Comparison of Falcon R5 processors versus R4¶

Recently IBM Quantum announced the move to revision 5 (R5) of its Falcon processors; see this tweet from Jay Gambetta. In particular, it was highlighted that there is an 8x reduction in measurement time on these systems. Let's see if this, or any other enhancement, is visible from the system calibration data.

## Summary¶

The highlight of the recently released Falcon R5 "core" systems is their much improved measurement times (7x) and error rates (2x). On these systems a measurement is roughly twice as long as a CNOT gate, compared to 13x on the old R4 systems, and allows for implementing high-fidelity dynamic circuits with resets, mid-circuit measurements, and eventually classically-conditioned gates. For other tasks, the modest improvements in the CNOT gate errors and $$T_{1}$$ times are also welcome.

## Frontmatter¶

```python
import numpy as np
from qiskit import *
import matplotlib.pyplot as plt
plt.style.use('nonhermitian')
```

## Load account and backend selection¶

Loading the account and making two lists: one for R5 backends and the other for R4. Which is which can be found on the systems page.

```python
IBMQ.load_account();
provider = IBMQ.get_provider(project='internal-test')

r5_backends = ['ibmq_kolkata', 'ibm_hanoi', 'ibm_kawasaki', 'ibm_cairo', 'ibm_auckland']
r4_backends = ['ibmq_montreal', 'ibmq_dublin', 'ibmq_toronto', 'ibmq_sydney']
```

## Get the calibration data¶

Here we make a function to get the calibration data and grab the results for our targeted machines.

```python
def backends_data(backends):
    """Return backend calibration data for a list of backends.

    Parameters:
        backends (list): A list of backend names.

    Returns:
        list: cx gate errors
        list: cx gate times
        list: meas errors
        list: meas times
        list: T1 values
        list: T2 values
    """
    cx_gate_errors = []
    cx_gate_times = []
    meas_errors = []
    meas_times = []
    t1s = []
    t2s = []
    for back in backends:
        backend = provider.get_backend(back)
        props = backend.properties()
        for gate in props.gates:
            if 'cx' in gate.name:
                if gate.parameters[0].value != 1.0:
                    cx_gate_errors.append(gate.parameters[0].value)
                    cx_gate_times.append(gate.parameters[1].value)
        for qubit in props.qubits:
            for item in qubit:
                if item.name == 'readout_error':
                    meas_errors.append(item.value)
                elif item.name == 'readout_length':
                    meas_times.append(item.value)
                elif item.name == 'T1':
                    t1s.append(item.value)
                elif item.name == 'T2':
                    t2s.append(item.value)
    return cx_gate_errors, cx_gate_times, meas_errors, meas_times, t1s, t2s

r5_data = backends_data(r5_backends)
r4_data = backends_data(r4_backends)
```

## Plot results¶

Here we compute the improvement of R5 over R4, if any, and plot it in a bar plot.

```python
improve = [np.median(r4_data[kk])/np.median(r5_data[kk]) for kk in range(4)]
improve.extend([np.median(r5_data[-2])/np.median(r4_data[-2]),
                np.median(r5_data[-1])/np.median(r4_data[-1])])

names = ['cx_error', 'cx_speed', 'meas_error', 'meas_speed',
         '$\mathrm{T}_{1}$', '$\mathrm{T}_{2}$']

fig, ax = plt.subplots(figsize=(8,6))
bars = ax.barh(names, improve)
ax.axvline(1, color='0.3', linestyle='dashed', lw=2)
ax.set_title('Falcon R5 characteristics versus R4')
ax.set_xlabel('Improvement')
ax.set_xlim([0, 8])
for bar in bars:
    width = bar.get_width()
    plt.text(bar.get_width()+0.3, bar.get_y()+0.5*bar.get_height(),
             '%sx' % np.round(width, 1), ha='center', va='center',
             fontsize=12, weight="semibold", color='0.3')
```

Ok cool, we do indeed see a roughly 7x improvement in the median measurement time, as well as an over 2x reduction in the associated measurement error. Not only that, there is a modest improvement in the CNOT gate speed and error rates as well, with a bit better $$T_{1}$$ to round out the gains. How was this accomplished? Well, the measurement gains come from a combination of increased measurement cavity linewidth ($$\kappa$$) due to stronger coupling, as well as an increase in the dispersive shift ($$\chi$$) that aids in visibility. The improvements in the CNOT gates come from further refinements in how the gates are performed.

This reduction in measurement time, and error, is critical for implementing dynamic circuits with qubit reset, mid-circuit measurements, and eventually conditional logic. Ideally, measurement should not take any longer than any gate, and we can look to see how the R5 measurement times compare to the CNOT gate times:

```python
print('Median R5 CNOT gate time: ', np.median(r5_data[1]))
print('Median R5 measurement time:', np.median(r5_data[3]))
```

Median R5 CNOT gate time:  348.4444444444444
Median R5 measurement time: 732.4444444444445

We see that a measurement takes about twice as long as a CNOT gate on the R5 systems. Thus, the R5 systems achieve the goal of high-fidelity readout on a timescale not much longer than the typical gate times on the system. What about the old R4 systems?

```python
print('Median R4 CNOT gate time: ', np.median(r4_data[1]))
print('Median R4 measurement time:', np.median(r4_data[3]))
```

Median R4 CNOT gate time:  419.55555555555554
Median R4 measurement time: 5276.444444444443

Ouch! Each measurement on an R4 system takes roughly as long as 13 CNOT gates. Not a good situation to be in when you need to measure and reset a qubit mid-circuit. Indeed, trying to do so on the R4 systems quickly leads to headaches. For example, you can try to play with the dynamic Bernstein-Vazirani example using an R5 (as done there) and comparing it to an R4 system.
# American Institute of Mathematical Sciences

July 2009, 12(1): 109-131. doi: 10.3934/dcdsb.2009.12.109

## Numerical computation of dichotomy rates and projectors in discrete time

1 Fakultät für Mathematik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld

Received September 2008  Revised December 2008  Published May 2009

We introduce a characterization of exponential dichotomies for linear difference equations that can be tested numerically and enables the approximation of dichotomy rates and projectors with high accuracy. The test is based on computing the bounded solutions of a specific inhomogeneous difference equation. For this task a boundary value and a least squares approach are applied. The results are illustrated using Hénon's map. We compute approximations of dichotomy rates and projectors of the variational equation along a homoclinic orbit and an orbit on the attractor, as well as for an almost periodic example. For the boundary value and the least squares approach, we analyze in detail the errors that occur when restricting the infinite-dimensional problem to a finite interval.

Citation: Thorsten Hüls. Numerical computation of dichotomy rates and projectors in discrete time. Discrete and Continuous Dynamical Systems - B, 2009, 12 (1) : 109-131. doi: 10.3934/dcdsb.2009.12.109
# A 12.0 L sample of argon gas has a pressure of 28.0 atm. What volume would this gas occupy at 9.70 atm?

Jun 6, 2016

The volume that this gas occupies is $34.6\ \text{L}$.

#### Explanation:

Let's start off by identifying our known and unknown variables. The first volume is 12.0 L, the first pressure is 28.0 atm, and the second pressure is 9.70 atm. Our only unknown is the second volume. We can obtain the answer using Boyle's Law, which states that there is an inverse relationship between pressure and volume as long as the temperature and number of moles remain constant. The equation we use is

$P_1 V_1 = P_2 V_2$

where the subscripts 1 and 2 represent the first and second conditions. All we have to do is rearrange the equation to solve for the volume. We do this by dividing both sides by $P_2$ in order to get $V_2$ by itself:

$V_2 = \dfrac{P_1 V_1}{P_2}$

Now all we do is plug and chug!

$V_2 = \dfrac{(28.0\ \cancel{\text{atm}})(12.0\ \text{L})}{9.70\ \cancel{\text{atm}}} = 34.6\ \text{L}$

Jun 6, 2016

$V_2 \approx 34.6\ \text{L}$

#### Explanation:

Use the combined gas law equation:

$\dfrac{P_1 V_1}{T_1} = \dfrac{P_2 V_2}{T_2}$

Since the temperature remains constant, $T_1 = T_2$, and the equation reduces to

$P_1 V_1 = P_2 V_2$

$V_2 = \dfrac{P_1 V_1}{P_2} = \dfrac{28.0\ \text{atm} \times 12.0\ \text{L}}{9.70\ \text{atm}} \approx 34.6\ \text{L}$
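The plug-and-chug step can be checked with a couple of lines (variable names are ours, just for illustration):

```python
# Boyle's law: P1 * V1 = P2 * V2 at constant temperature and moles
P1, V1, P2 = 28.0, 12.0, 9.70  # atm, L, atm

V2 = P1 * V1 / P2  # solve for the unknown volume
print(round(V2, 1))  # -> 34.6 (L)
```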
# Analyzing a lammps trajectory

In this example, a lammps trajectory in dump-text format will be read in, and Steinhardt's parameters will be calculated. Earlier versions of this tutorial used the traj_process module. It is now updated to work with the more efficient trajectory module.

import pyscal.core as pc
import os
from pyscal.trajectory import Trajectory
import matplotlib.pyplot as plt
import numpy as np

First, we will use the Trajectory class to load the trajectory.

traj = Trajectory("traj.light")
traj

Trajectory of 10 slices with 500 atoms

Now we can make a small function which reads a slice and calculates $$q_6$$ values.

def calculate_q6(timeslice, format="lammps-dump"):
    sys = timeslice.to_system()[0]
    sys.find_neighbors(method="cutoff", cutoff=0)
    sys.calculate_q(6)
    q6 = sys.get_qvals(6)
    return q6

There are a couple of things of interest in the above function. The find_neighbors method finds the neighbors of the individual atoms. Here, an adaptive method is used (cutoff=0), but one can also use a fixed cutoff or Voronoi tessellation. Also, only the unaveraged $$q_6$$ values are calculated above. The averaged ones can be calculated using the averaged=True keyword in both the calculate_q and get_qvals methods. Now we can simply call the function for each time slice.

q6s = [calculate_q6(traj[x]) for x in range(traj.nblocks)]

We can now visualise the calculated values.

plt.plot(np.hstack(q6s))

[<matplotlib.lines.Line2D at 0x7f230fdeab80>]

We will now modify the above function to also find clusters which satisfy a particular $$q_6$$ value. But first, for a single slice:

sys = traj[0].to_system()[0]
sys.find_neighbors(method="cutoff", cutoff=0)
sys.calculate_q(6)

Now a clustering algorithm can be applied on top using the cluster_atoms method. cluster_atoms takes a condition as argument which should give a True/False value for each atom. Let's define a condition.
def condition(atom):
    return atom.get_q(6) > 0.5

The above function returns True for any atom which has a $$q_6$$ value greater than 0.5, and False otherwise. Now we can call the cluster_atoms method.

sys.cluster_atoms(condition)

16

The method returns 16, which here is the size of the largest cluster of atoms with a $$q_6$$ value of 0.5 or higher. If information about all clusters is required, that can also be accessed.

atoms = sys.atoms

atom.cluster gives the number of the cluster that each atom belongs to. If the value is -1, the atom does not belong to any cluster, that is, the clustering condition was not met.

clusters = [atom.cluster for atom in atoms if atom.cluster != -1]

Now we can see how many unique clusters there are, and what their sizes are.

unique_clusters, counts = np.unique(clusters, return_counts=True)

counts contains all the necessary information. len(counts) gives the number of unique clusters.

plt.bar(range(len(counts)), counts)
plt.ylabel("Number of atoms in cluster")
plt.xlabel("Cluster ID")

Text(0.5, 0, 'Cluster ID')

Now we can finally put all of these together into a single function and run it over our individual time slices.
def calculate_q6_cluster(timeslice, cutoff_q6=0.5, format="lammps-dump"):
    sys = timeslice.to_system()[0]
    sys.find_neighbors(method="cutoff", cutoff=0)
    sys.calculate_q(6)
    def _condition(atom):
        return atom.get_q(6) > cutoff_q6
    sys.cluster_atoms(_condition)
    atoms = sys.atoms
    clusters = [atom.cluster for atom in atoms if atom.cluster != -1]
    unique_clusters, counts = np.unique(clusters, return_counts=True)
    return counts

q6clusters = [calculate_q6_cluster(traj[x]) for x in range(traj.nblocks)]

We can plot the number of clusters for each slice.

plt.plot(range(len(q6clusters)), [len(x) for x in q6clusters], 'o-')
plt.xlabel("Time slice")
plt.ylabel("number of unique clusters")

Text(0, 0.5, 'number of unique clusters')

We can also plot the biggest cluster size.

plt.plot(range(len(q6clusters)), [max(x) for x in q6clusters], 'o-')
plt.xlabel("Time slice")
plt.ylabel("Largest cluster size")

Text(0, 0.5, 'Largest cluster size')

## Using ASE

The above example can also be done using ASE. The ASE read method needs to be imported.

from ase.io import read

traj = read("traj.light", format="lammps-dump-text", index=":")

In the above function, index=":" tells ase to read the complete trajectory. The individual slices can now be accessed by indexing.

traj[0]

Atoms(symbols='H500', pbc=False, cell=[18.21922, 18.22509, 18.36899], momenta=...)

We can use the same functions as above, but by specifying a different file format.

q6clusters_ase = [calculate_q6_cluster(x, format="ase") for x in traj]

We will plot and compare with the results from before.

plt.plot(range(len(q6clusters_ase)), [max(x) for x in q6clusters_ase], 'o-')
plt.xlabel("Time slice")
plt.ylabel("Largest cluster size")

Text(0, 0.5, 'Largest cluster size')

As expected, the results are identical for both calculations!
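The bookkeeping step in the cluster analysis — filtering out the -1 labels and counting atoms per cluster — can be illustrated on a toy label array (the labels below are made up purely for the sake of the example and do not come from the trajectory):

```python
import numpy as np

# Hypothetical per-atom cluster labels; -1 means "not in any cluster"
labels = [0, 0, -1, 2, 2, 2, -1, 0, 5]

clusters = [c for c in labels if c != -1]
unique_clusters, counts = np.unique(clusters, return_counts=True)

print(unique_clusters)  # -> [0 2 5]
print(counts)           # -> [3 3 1]
print(counts.max())     # largest cluster size, here 3
```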
Under the auspices of the Computational Complexity Foundation (CCF)

### Paper: TR11-112 | 10th August 2011 15:09

#### The Projection Games Conjecture and The NP-Hardness of ln n-Approximating Set-Cover

TR11-112

Authors: Dana Moshkovitz

Publication: 10th August 2011 16:04
Let me remark on the $C^m$ version of the question for $m \geq 1$. A classic result of H. Whitney from Differentiable functions defined in closed sets I, Transactions A.M.S. 36 (1934), 369–387 reads as follows:

Suppose that $f : \lbrace x_1, x_2 ,\cdots\rbrace \rightarrow \mathbb{R}$ is given, with $x_k$ some convergent increasing sequence. Suppose that the $k$'th divided difference quotient based at $x_n$, defined inductively by $$\Delta_{n,n+k+1} f := \frac{\Delta_{n,n+k} f - \Delta_{n+1,n+k+1} f}{x_{n} - x_{n+k+1}}$$ and $$\Delta_{n,n} f = f(x_n),$$ satisfies $|\Delta_{n,n+k+1} f| \leq \Lambda$ uniformly in $n \geq 1$ and $0 \leq k \leq m$ for some $\Lambda < \infty$. In addition, suppose that $$\Delta_{n,n+k+1} f \rightarrow c_k$$ as $n \rightarrow \infty$ for each $0 \leq k \leq m$. Then there exists a $C^m$ function $g: \mathbb{R} \rightarrow \mathbb{R}$ with $g(x_n) = f(x_n)$ for all $n \geq 1$ and $$\|g\|_{C^m} \leq C(m) \Lambda.$$

In Whitney's original paper, this theorem was proven in greater generality for $f : E \rightarrow \mathbb{R}$ with $E \subset \mathbb{R}$ arbitrary and closed, instead of the special case where $E$ consists of a sequence with a single limit point.

I believe that a similar constructive characterization is known for the $C^\infty$ case under the assumption that $\lbrace x_k\rbrace$ is quickly decreasing, in the sense that $P(k) |x_k| \rightarrow 0$ for every polynomial $P(k)$. Even for the example of $x_n=-1/n$ I do not know the answer for a general $f$. For quickly decreasing sequences $\lbrace x_k \rbrace$ a recent result used in work of Charles Fefferman and Fulvio Ricci gives a characterization of which functions $f: \lbrace x_k \rbrace \rightarrow \mathbb{R}$ can be extended into $C^\infty(\mathbb{R})$. Unfortunately, I learned of this at a recent conference and I cannot locate the preprint.

I am also not sure of the relationship between their result and Andrew's answer, since polygonal curves are never graphs of $C^\infty$ functions (or even $C^1$ functions).
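To make the recursion concrete, here is a small sketch (names are ours) computing the divided differences above for the sequence $x_n = -1/n$ mentioned in the text. For $f(x) = x^2$ every second-order divided difference equals $1$, as expected since a $k$-th divided difference equals $f^{(k)}(\xi)/k!$ at some intermediate point $\xi$, and here $f'' \equiv 2$:

```python
def dd(x, fvals, n, m):
    # Divided difference Delta_{n,m} f over the nodes x[n..m],
    # following the recursion quoted from Whitney's theorem
    if n == m:
        return fvals[n]
    return (dd(x, fvals, n, m - 1) - dd(x, fvals, n + 1, m)) / (x[n] - x[m])

xs = [-1.0 / k for k in range(1, 9)]   # the sequence x_n = -1/n
fvals = [t * t for t in xs]            # f(x) = x^2 sampled on the sequence

second = [dd(xs, fvals, n, n + 2) for n in range(6)]
print(second)  # all entries numerically equal to 1
```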
# Definite Integral of Uniformly Convergent Series of Continuous Functions

## Theorem

Let $\sequence {f_n}$ be a sequence of real functions.

Let each of $\sequence {f_n}$ be continuous on the interval $\closedint a b$.

Let the series:

$\ds \map f x := \sum_{n \mathop = 1}^\infty \map {f_n} x$

be uniformly convergent for all $x \in \closedint a b$.

Then:

$\ds \int_a^b \map f x \rd x = \sum_{n \mathop = 1}^\infty \int_a^b \map {f_n} x \rd x$

## Proof

Define the partial sum $\ds \map {S_N} x = \sum_{n \mathop = 1}^N \map {f_n} x$.

As a uniform limit of continuous functions, $f$ is continuous on $\closedint a b$, so the integral on the left hand side exists.

We have:

$\ds \size {\int_a^b \map f x \rd x - \sum_{n \mathop = 1}^N \int_a^b \map {f_n} x \rd x} = \size {\int_a^b \paren {\map f x - \map {S_N} x} \rd x} \le \paren {b - a} \sup_{x \mathop \in \closedint a b} \size {\map f x - \map {S_N} x} \to 0$

as $N \to +\infty$, where the last step follows from the uniform convergence of $S_N$ to $f$.

$\blacksquare$
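As a numerical illustration (ours, not part of the proof), take the geometric series $\sum_{n \mathop = 0}^\infty x^n = \dfrac 1 {1 - x}$, which converges uniformly on $\closedint 0 {\frac 1 2}$; both sides of the theorem's conclusion should then equal $\ln 2$:

```python
import math

a, b = 0.0, 0.5

# Integral of the limit function: int_0^{1/2} dx/(1-x) = ln 2
lhs = -math.log(1 - b) + math.log(1 - a)

# Sum of the termwise integrals: int_0^{1/2} x^n dx = (1/2)^(n+1)/(n+1)
rhs = sum(b ** (n + 1) / (n + 1) for n in range(60))  # truncated; tail is tiny

print(abs(lhs - rhs) < 1e-12)  # -> True
```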
# IB DP Maths Topic 9.4 Improper integrals of the type HL Paper 3 ## Question (a)     Using l’Hopital’s Rule, show that $$\mathop {\lim }\limits_{x \to \infty } x{{\text{e}}^{ – x}} = 0$$ . (b)     Determine $$\int_0^a {x{{\text{e}}^{ – x}}{\text{d}}x}$$ . (c)     Show that the integral $$\int_0^\infty {x{{\text{e}}^{ – x}}{\text{d}}x}$$ is convergent and find its value. ## Markscheme (a)     $$\mathop {\lim }\limits_{x \to \infty } \frac{x}{{{{\text{e}}^x}}} = \mathop {\lim }\limits_{x \to \infty } \frac{1}{{{{\text{e}}^x}}}$$     M1A1 = 0     AG [2 marks] (b)     Using integration by parts     M1 $$\int_0^a {x{{\text{e}}^{ – x}}{\text{d}}x} = \left[ { – x{{\text{e}}^{ – x}}} \right]_0^a + \int_0^a {{{\text{e}}^{ – x}}{\text{d}}x}$$     A1A1 $$= – a{{\text{e}}^{ – a}} – \left[ {{e^{ – x}}} \right]_0^a$$     A1 $$= 1 – a{{\text{e}}^{ – a}} – {{\text{e}}^{ – a}}$$     A1 [5 marks] (c)     Since $${{\text{e}}^{ – a}}$$ and $$a{{\text{e}}^{ – a}}$$ are both convergent (to zero), the integral is convergent.     R1 Its value is 1.     A1 [2 marks] Total [9 marks] ## Examiners report Most candidates made a reasonable attempt at (a). In (b), however, it was disappointing to note that some candidates were unable to use integration by parts to perform the integration. In (c), while many candidates obtained the correct value of the integral, proof of its convergence was often unconvincing. ## Question Find the exact value of $$\int_0^\infty {\frac{{{\text{d}}x}}{{(x + 2)(2x + 1)}}}$$. 
## Markscheme Let $$\frac{1}{{(x + 2)(2x + 1)}} = \frac{A}{{x + 2}} + \frac{B}{{2x + 1}} = \frac{{A(2x + 1) + B(x + 2)}}{{(x + 2)(2x + 1)}}$$     M1A1 $$x = – 2 \to A = – \frac{1}{3}$$     A1 $$x = – \frac{1}{2} \to B = \frac{2}{3}$$     A1     N3 $$I = \frac{1}{3}\int_0^h {\left[ {\frac{2}{{(2x + 1)}} – \frac{1}{{(x + 2)}}} \right]{\text{d}}x}$$     M1 $$= \frac{1}{3}\left[ {\ln (2x + 1) – \ln (x + 2)} \right]_0^h$$     A1 $$= \frac{1}{3}\left[ {\mathop {\lim }\limits_{h \to \infty } \left( {\ln \left( {\frac{{2h + 1}}{{h + 2}}} \right)} \right) – \ln \frac{1}{2}} \right]$$     A1 $$= \frac{1}{3}\left( {\ln 2 – \ln \frac{1}{2}} \right)$$     A1 $$= \frac{2}{3}\ln 2$$     A1 Note: If the logarithms are not combined in the third from last line the last three A1 marks cannot be awarded. Total [9 marks] ## Examiners report Not a difficult question but combination of the logarithms obtained by integration was often replaced by a spurious argument with infinities to get an answer. $$\log (\infty + 1)$$ was often seen. ## Question Find the set of values of k for which the improper integral $$\int_2^\infty {\frac{{{\text{d}}x}}{{x{{(\ln x)}^k}}}}$$ converges. [6] a. Show that the series $$\sum\limits_{r = 2}^\infty {\frac{{{{( – 1)}^r}}}{{r\ln r}}}$$ is convergent but not absolutely convergent. [5] b. ## Markscheme consider the limit as $$R \to \infty$$ of the (proper) integral $$\int_2^R {\frac{{{\text{d}}x}}{{x{{(\ln x)}^k}}}}$$     (M1) substitute $$u = \ln x,{\text{ d}}u = \frac{1}{x}{\text{d}}x$$     (M1) obtain $$\int_{\ln 2}^{\ln R} {\frac{1}{{{u^k}}}{\text{d}}u = \left[ { – \frac{1}{{k – 1}}\frac{1}{{{u^{k – 1}}}}} \right]_{\ln 2}^{\ln R}}$$     A1 Note: Ignore incorrect limits or omission of limits at this stage. or $$[\ln u]_{\ln 2}^{\ln R}$$ if k = 1     A1 Note: Ignore incorrect limits or omission of limits at this stage. 
because $$\ln R{\text{ }}({\text{and }}\ln \ln R) \to \infty {\text{ as }}R \to \infty$$     (M1) converges in the limit if k > 1     A1 [6 marks] a. C: $${\text{terms}} \to 0{\text{ as }}r \to \infty$$     A1 $$\left| {{u_{r + 1}}} \right| < \left| {{u_r}} \right|$$ for all r     A1 convergence by alternating series test     R1 AC: $${(x\ln x)^{ – 1}}$$ is positive and decreasing on $$[2,\,\infty )$$     A1 not absolutely convergent by integral test using part (a) for k = 1     R1 [5 marks] b. ## Examiners report A good number of candidates were able to find the integral in part (a) although the vast majority did not consider separately the integral when k = 1. Many candidates did not explicitly set a limit for the integral to let this limit go to infinity in the anti – derivative and it seemed that some candidates were “substituting for infinity”. This did not always prevent candidates finding a correct final answer but the lack of good technique is a concern. In part (b) many candidates seemed to have some knowledge of the relevant test for convergence but this test was not always rigorously applied. In showing that the series was not absolutely convergent candidates were often not clear in showing that the function being tested had to meet a number of criteria and in so doing lost marks. a. A good number of candidates were able to find the integral in part (a) although the vast majority did not consider separately the integral when k = 1. Many candidates did not explicitly set a limit for the integral to let this limit go to infinity in the anti – derivative and it seemed that some candidates were “substituting for infinity”. This did not always prevent candidates finding a correct final answer but the lack of good technique is a concern. In part (b) many candidates seemed to have some knowledge of the relevant test for convergence but this test was not always rigorously applied. 
In showing that the series was not absolutely convergent candidates were often not clear in showing that the function being tested had to meet a number of criteria and in so doing lost marks. b. ## Question Prove that $$\mathop {\lim }\limits_{H \to \infty } \int_a^H {\frac{1}{{{x^2}}}{\text{d}}x}$$ exists and find its value in terms of $$a{\text{ (where }}a \in {\mathbb{R}^ + })$$. [3] a. Use the integral test to prove that $$\sum\limits_{n = 1}^\infty {\frac{1}{{{n^2}}}}$$ converges. [3] b. Let $$\sum\limits_{n = 1}^\infty {\frac{1}{{{n^2}}}} = L$$ . The diagram below shows the graph of $$y = \frac{1}{{{x^2}}}$$. (i)     Shade suitable regions on a copy of the diagram above and show that $$\sum\limits_{n = 1}^k {\frac{1}{{{n^2}}}} + \int_{k + 1}^\infty {\frac{1}{{{x^2}}}} {\text{d}}x < L$$ . (ii)     Similarly shade suitable regions on another copy of the diagram above and show that $$L < \sum\limits_{n = 1}^k {\frac{1}{{{n^2}}}} + \int_k^\infty {\frac{1}{{{x^2}}}} {\text{d}}x$$ . [6] c. Hence show that $$\sum\limits_{n = 1}^k {\frac{1}{{{n^2}}}} + \frac{1}{{k + 1}} < L < \sum\limits_{n = 1}^k {\frac{1}{{{n^2}}}} + \frac{1}{k}$$ [2] d. You are given that $$L = \frac{{{\pi ^2}}}{6}$$. By taking k = 4 , use the upper bound and lower bound for L to find an upper bound and lower bound for $$\pi$$ . Give your bounds to three significant figures. [3] e. ## Markscheme $$\mathop {\lim }\limits_{H \to \infty } \int_a^H {\frac{1}{{{x^2}}}{\text{d}}x} = \mathop {\lim }\limits_{H \to \infty } \left[ {\frac{{ – 1}}{x}} \right]_a^H$$     A1 $$\mathop {\lim }\limits_{H \to \infty } \left( {\frac{{ – 1}}{H} + \frac{1}{a}} \right)$$     A1 $$= \frac{1}{a}$$     A1 [3 marks] a. 
as $$\left\{ {\frac{1}{{{n^2}}}} \right\}$$ is a positive decreasing sequence we consider the function $$\frac{1}{{{x^2}}}$$ we look at $$\int_1^\infty {\frac{1}{{{x^2}}}} {\text{d}}x$$     M1 $$\int_1^\infty {\frac{1}{{{x^2}}}} {\text{d}}x = 1$$     A1 since this is finite (allow “limit exists” or equivalent statement)     R1 $$\sum\limits_{n = 1}^\infty {\frac{1}{{{n^2}}}}$$ converges     AG [3 marks] b. (i) correct start and finish points for rectangles     A1 since the area shaded is less that the area of the required staircase we have     R1 $$\sum\limits_{n = 1}^k {\frac{1}{{{n^2}}}} + \int_{k + 1}^\infty {\frac{1}{{{x^2}}}} {\text{d}}x < L$$     AG (ii) correct start and finish points for rectangles     A1 since the area shaded is greater that the area of the required staircase we have     R1 $$L < \sum\limits_{n = 1}^k {\frac{1}{{{n^2}}}} + \int_k^\infty {\frac{1}{{{x^2}}}} {\text{d}}x$$     AG Note: Alternative shading and rearranging of the inequality is acceptable. [6 marks] c. $$\int_{k + 1}^\infty {\frac{1}{{{x^2}}}} {\text{d}}x = \frac{1}{{k + 1}},{\text{ }}\int_k^\infty {\frac{1}{{{x^2}}}} {\text{d}}x = \frac{1}{k}$$     A1A1 $$\sum\limits_{n = 1}^k {\frac{1}{{{n^2}}}} + \frac{1}{{k + 1}} < L < \sum\limits_{n = 1}^k {\frac{1}{{{n^2}}}} + \frac{1}{k}$$     AG [2 marks] d. $$\frac{{205}}{{144}} + \frac{1}{5} < \frac{{{\pi ^2}}}{6} < \frac{{205}}{{144}} + \frac{1}{4}{\text{ }}\left( {1.6236… < \frac{{{\pi ^2}}}{6} < 1.6736…} \right)$$     A1 $$\sqrt {6\left( {\frac{{205}}{{144}} + \frac{1}{5}} \right)} < \pi < \sqrt {6\left( {\frac{{205}}{{144}} + \frac{1}{4}} \right)}$$     (M1) $$3.12 < \pi < 3.17$$     A1     N2 [3 marks] e. ## Examiners report Most candidates correctly obtained the result in part (a). Many then failed to realise that having obtained this result once it could then simply be stated when doing parts (b) and (d) a. Most candidates correctly obtained the result in part (a). 
Many then failed to realise that having obtained this result once it could then simply be stated when doing parts (b) and (d) In part (b) the calculation of the integral as equal to 1 only scored 2 of the 3 marks. The final mark was for stating that ‘because the value of the integral is finite (or ‘the limit exists’ or an equivalent statement) then the series converges. Quite a few candidates left out this phrase. b. Most candidates correctly obtained the result in part (a). Many then failed to realise that having obtained this result once it could then simply be stated when doing parts (b) and (d) Candidates found part (c) difficult. Very few drew the correct series of rectangles and some clearly had no idea of what was expected of them. c. Most candidates correctly obtained the result in part (a). Many then failed to realise that having obtained this result once it could then simply be stated when doing parts (b) and (d) d. Though part (e) could be done without doing any of the previous parts of the question many students were probably put off by the notation because only a minority attempted it. e. ## Question Consider the functions $$f$$ and $$g$$ given by $$f(x) = \frac{{{{\text{e}}^x} + {{\text{e}}^{ – x}}}}{2}{\text{ and }}g(x) = \frac{{{{\text{e}}^x} – {{\text{e}}^{ – x}}}}{2}$$. Show that $$f'(x) = g(x)$$ and $$g'(x) = f(x)$$. [2] a. Find the first three non-zero terms in the Maclaurin expansion of $$f(x)$$. [5] b. Hence find the value of $$\mathop {{\text{lim}}}\limits_{x \to 0} \frac{{1 – f(x)}}{{{x^2}}}$$. [3] c. Find the value of the improper integral $$\int_0^\infty {\frac{{g(x)}}{{{{\left[ {f(x)} \right]}^2}}}{\text{d}}x}$$. [6] d. 
## Markscheme any correct step before the given answer     A1AG eg, $$f'(x) = \frac{{{{\left( {{{\text{e}}^x}} \right)}^\prime } + {{\left( {{{\text{e}}^{ – x}}} \right)}^\prime }}}{2} = \frac{{{{\text{e}}^x} – {{\text{e}}^{ – x}}}}{2} = g(x)$$ any correct step before the given answer     A1AG eg, $$g'(x) = \frac{{{{\left( {{{\text{e}}^x}} \right)}^\prime } – {{\left( {{{\text{e}}^{ – x}}} \right)}^\prime }}}{2} = \frac{{{{\text{e}}^x} + {{\text{e}}^{ – x}}}}{2} = f(x)$$ [2 marks] a. METHOD 1 statement and attempted use of the general Maclaurin expansion formula     (M1) $$f(0) = 1;{\text{ }}g(0) = 0$$ (or equivalent in terms of derivative values)   A1A1 $$f(x) = 1 + \frac{{{x^2}}}{2} + \frac{{{x^4}}}{{24}}$$ or $$f(x) = 1 + \frac{{{x^2}}}{{2!}} + \frac{{{x^4}}}{{4!}}$$     A1A1 METHOD 2 $${{\text{e}}^x} = 1 + x + \frac{{{x^2}}}{{2!}} + \frac{{{x^3}}}{{3!}} + \frac{{{x^4}}}{{4!}} + \ldots$$     A1 $${{\text{e}}^{ – x}} = 1 – x + \frac{{{x^2}}}{{2!}} – \frac{{{x^3}}}{{3!}} + \frac{{{x^4}}}{{4!}} + \ldots$$     A1 adding and dividing by 2     M1 $$f(x) = 1 + \frac{{{x^2}}}{2} + \frac{{{x^4}}}{{24}}$$ or $$f(x) = 1 + \frac{{{x^2}}}{{2!}} + \frac{{{x^4}}}{{4!}}$$     A1A1 Notes: Accept 1, $$\frac{{{x^2}}}{2}$$ and $$\frac{{{x^4}}}{{24}}$$ or 1, $$\frac{{{x^2}}}{{2!}}$$ and $$\frac{{{x^4}}}{{4!}}$$. Award A1 if two correct terms are seen. [5 marks] b. 
METHOD 1 attempted use of the Maclaurin expansion from (b)     M1 $$\mathop {{\text{lim}}}\limits_{x \to 0} \frac{{1 – f(x)}}{{{x^2}}} = \mathop {{\text{lim}}}\limits_{x \to 0} \frac{{1 – \left( {1 + \frac{{{x^2}}}{2} + \frac{{{x^4}}}{{24}} + \ldots } \right)}}{{{x^2}}}$$ $$\mathop {{\text{lim}}}\limits_{x \to 0} \left( { – \frac{1}{2} – \frac{{{x^2}}}{{24}} – \ldots } \right)$$     A1 $$= – \frac{1}{2}$$     A1 METHOD 2 attempted use of L’Hôpital and result from (a)     M1 $$\mathop {{\text{lim}}}\limits_{x \to 0} \frac{{1 – f(x)}}{{{x^2}}} = \mathop {{\text{lim}}}\limits_{x \to 0} \frac{{ – g(x)}}{{2x}}$$ $$\mathop {{\text{lim}}}\limits_{x \to 0} \frac{{ – f(x)}}{2}$$     A1 $$= – \frac{1}{2}$$     A1 [3 marks] c. METHOD 1 use of the substitution $$u = f(x)$$ and $$\left( {{\text{d}}u = g(x){\text{d}}x} \right)$$     (M1)(A1) attempt to integrate $$\int_1^\infty {\frac{{{\text{d}}u}}{{{u^2}}}}$$     (M1) obtain $$\left[ { – \frac{1}{u}} \right]_1^\infty$$ or $$\left[ { – \frac{1}{{f(x)}}} \right]_0^\infty$$     A1 recognition of an improper integral by use of a limit or statement saying the integral converges     R1 obtain 1     A1     N0 METHOD 2 $$\int_0^\infty {\frac{{\frac{{{{\text{e}}^x} – {{\text{e}}^{ – x}}}}{2}}}{{{{\left( {\frac{{{{\text{e}}^x} + {{\text{e}}^{ – x}}}}{2}} \right)}^2}}}{\text{d}}x = \int_0^\infty {\frac{{2\left( {{{\text{e}}^x} – {{\text{e}}^{ – x}}} \right)}}{{{{\left( {{{\text{e}}^x} + {{\text{e}}^{ – x}}} \right)}^2}}}{\text{d}}x} }$$     (M1) use of the substitution $$u = {{\text{e}}^x} + {{\text{e}}^{ – x}}$$ and $$\left( {{\text{d}}u = {{\text{e}}^x} – {{\text{e}}^{ – x}}{\text{d}}x} \right)$$     (M1) attempt to integrate $$\int_2^\infty {\frac{{2{\text{d}}u}}{{{u^2}}}}$$     (M1) obtain $$\left[ { – \frac{2}{u}} \right]_2^\infty$$     A1 recognition of an improper integral by use of a limit or statement saying the integral converges     R1 obtain 1     A1     N0 [6 marks] d. [N/A] a. [N/A] b. [N/A] c. [N/A] d. 
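For part (d), the antiderivative found in METHOD 1 is $ – 1/f(x)$ with $f(x) = \cosh x$, so the proper integral up to $R$ equals $1 – 1/\cosh R$, which tends to $1$. A quick numerical check of this (our own illustration, not part of the markscheme):

```python
import math

def f(x):
    # f(x) = (e^x + e^-x)/2, i.e. cosh(x)
    return (math.exp(x) + math.exp(-x)) / 2

def F(R):
    # Value of the proper integral over [0, R] via the antiderivative -1/f(x)
    return -1 / f(R) + 1 / f(0)

print(F(5), F(20), F(40))  # approaches 1 as R grows
```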
## Question
A function $$f$$ is given by $$f(x) = \int_0^x \ln(2 + \sin t)\,\mathrm{d}t$$.
Write down $$f'(x)$$. [1] a.
By differentiating $$f(x^2)$$, obtain an expression for the derivative of $$\int_0^{x^2} \ln(2 + \sin t)\,\mathrm{d}t$$ with respect to $$x$$. [3] b.
Hence obtain an expression for the derivative of $$\int_x^{x^2} \ln(2 + \sin t)\,\mathrm{d}t$$ with respect to $$x$$. [3] c.
## Markscheme
$$\ln(2 + \sin x)$$     A1
Note: Do not accept $$\ln(2 + \sin t)$$.
[1 mark] a.
attempt to use chain rule     (M1)
$$\frac{\mathrm{d}}{\mathrm{d}x}\left(f(x^2)\right) = 2x f'(x^2)$$     (A1)
$$= 2x\ln\left(2 + \sin(x^2)\right)$$     A1
[3 marks] b.
$$\int_x^{x^2} \ln(2 + \sin t)\,\mathrm{d}t = \int_0^{x^2} \ln(2 + \sin t)\,\mathrm{d}t - \int_0^x \ln(2 + \sin t)\,\mathrm{d}t$$     (M1)(A1)
$$\frac{\mathrm{d}}{\mathrm{d}x}\left(\int_x^{x^2} \ln(2 + \sin t)\,\mathrm{d}t\right) = 2x\ln\left(2 + \sin(x^2)\right) - \ln(2 + \sin x)$$     A1
[3 marks] c.
## Examiners report
Many candidates answered this question well. Many others showed no knowledge of this part of the option; candidates that recognized the Fundamental Theorem of Calculus answered this question well. In general the scores were either very low or full marks. a. b. c.
## Question
Find the value of $$\int_4^\infty \frac{1}{x^3}\,\mathrm{d}x$$. [3] a.
Illustrate graphically the inequality $$\sum_{n = 5}^\infty \frac{1}{n^3} < \int_4^\infty \frac{1}{x^3}\,\mathrm{d}x < \sum_{n = 4}^\infty \frac{1}{n^3}$$. [4] b.
Hence write down a lower bound for $$\sum_{n = 4}^\infty \frac{1}{n^3}$$. [1] c.
Find an upper bound for $$\sum_{n = 4}^\infty \frac{1}{n^3}$$. [3] d.
## Markscheme
$$\int_4^\infty \frac{1}{x^3}\,\mathrm{d}x = \lim_{R \to \infty} \int_4^R \frac{1}{x^3}\,\mathrm{d}x$$     (A1)
Note: The above A1 for using a limit can be awarded at any stage. Condone the use of $$\lim_{x \to \infty}$$. Do not award this mark to candidates who use $$\infty$$ as the upper limit throughout.
$$= \lim_{R \to \infty} \left[-\frac{1}{2}x^{-2}\right]_4^R \left(= \left[-\frac{1}{2}x^{-2}\right]_4^\infty\right)$$     M1
$$= \lim_{R \to \infty} \left(-\frac{1}{2}\left(R^{-2} - 4^{-2}\right)\right)$$
$$= \frac{1}{32}$$     A1
[3 marks] a.
A1A1A1A1
A1 for the curve
A1 for rectangles starting at $$x = 4$$
A1 for at least three upper rectangles
A1 for at least three lower rectangles
Note: Award A0A1 for two upper rectangles and two lower rectangles.
sum of areas of the lower rectangles < the area under the curve < the sum of the areas of the upper rectangles
so $$\sum_{n = 5}^\infty \frac{1}{n^3} < \int_4^\infty \frac{1}{x^3}\,\mathrm{d}x < \sum_{n = 4}^\infty \frac{1}{n^3}$$     AG
[4 marks] b.
a lower bound is $$\frac{1}{32}$$     A1
Note: Allow FT from part (a).
[1 mark] c.
METHOD 1
$$\sum_{n = 5}^\infty \frac{1}{n^3} < \frac{1}{32}$$     (M1)
$$\frac{1}{64} + \sum_{n = 5}^\infty \frac{1}{n^3} < \frac{1}{32} + \frac{1}{64}$$     (M1)
$$\sum_{n = 4}^\infty \frac{1}{n^3} < \frac{3}{64}$$, an upper bound     A1
Note: Allow FT from part (a).
METHOD 2
changing the lower limit in the inequality in part (b) gives
$$\sum_{n = 4}^\infty \frac{1}{n^3} < \int_3^\infty \frac{1}{x^3}\,\mathrm{d}x \left(< \sum_{n = 3}^\infty \frac{1}{n^3}\right)$$     (A1)
$$\sum_{n = 4}^\infty \frac{1}{n^3} < \lim_{R \to \infty} \left[-\frac{1}{2}x^{-2}\right]_3^R$$     (M1)
$$\sum_{n = 4}^\infty \frac{1}{n^3} < \frac{1}{18}$$, an upper bound     A1
Note: Condone candidates who do not use a limit.
[3 marks] d.
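Both bounds can be checked against a direct partial sum (a sketch; the true value of the tail from $n=4$ is about 0.0400, since $\zeta(3) \approx 1.2021$):

```python
# Partial sum of 1/n^3 from n = 4; the neglected tail is below
# the integral from N to infinity of x^-3, i.e. 1/(2N^2), tiny for N = 10^5.
N = 100000
partial = sum(1.0 / n ** 3 for n in range(4, N + 1))
lower, upper = 1 / 32, 3 / 64   # 0.03125 and 0.046875, bracketing ~0.04002
```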
Pedal Power Extra for Experts | CK-12 Foundation
# 8.15: Pedal Power Extra for Experts
Extra for Experts: Pedal Power – Write Function Rules from Graphs and Tables
Solutions: Extra for Experts, Pedal Power: 1

| Time (hours) | Jefferson distance (miles) | Richards distance (miles) |
| --- | --- | --- |
| 0 | 0 | 0 |
| 1 | 6 | 0 |
| 2 | 12 | 0 |
| 3 | 18 | 18 |
| 4 | 24 | 36 |
| 5 | 30 | 54 |

Jefferson: $D = 6t$
Richards: $D = 18(t - 2)$
Extra for Experts, Pedal Power: 2

| Time (hours) | Prentiss distance (miles) | Jerome distance (miles) |
| --- | --- | --- |
| 0 | 0 | 0 |
| 1 | 0 | 8 |
| 2 | 0 | 16 |
| 3 | 16 | 24 |
| 4 | 32 | 32 |
| 5 | 48 | 40 |

Prentiss: $D = 16(t - 2)$
Jerome: $D = 8t$
Extra for Experts: Pedal Power 1 – Write Function Rules from Graphs and Tables
Fact: Jefferson biked at one-third Richards' speed. Jefferson left 2 hours before Richards. Use the Fact and the graph. Complete the table for each biker showing distance traveled. Write a function to show the relationship between number of miles traveled $(D)$ and number of hours $(t)$ traveled for each biker.
Extra for Experts: Pedal Power 2 – Write Function Rules from Graphs and Tables
Fact: Prentiss biked twice as fast as Jerome and left 2 hours after Jerome. Use the Fact and the graph. Complete the table for each biker showing distance traveled. Write a function to show the relationship between number of miles traveled $(D)$ and number of hours $(t)$ traveled for each biker.
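The function rules in the solutions can be checked directly against the first table (a sketch; the Richards rule only applies once he has started riding, so distance is 0 for t < 2):

```python
def jefferson(t):
    return 6 * t                             # 6 mph, departs at t = 0

def richards(t):
    return 18 * (t - 2) if t >= 2 else 0     # 18 mph, departs 2 hours later

# (hours, Jefferson miles, Richards miles) rows from the first table
table = [(0, 0, 0), (1, 6, 0), (2, 12, 0), (3, 18, 18), (4, 24, 36), (5, 30, 54)]
for t, d_j, d_r in table:
    assert jefferson(t) == d_j
    assert richards(t) == d_r
```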
# Eigenvalues of an imperfect circulant matrix
For the circulant matrix, for example, $$\begin{bmatrix} a & b & 0 & & 0 \\ 0 & a & b & \cdots & 0 \\ 0 & 0 & a & & 0 \\ & \vdots & &\ddots & \vdots\\ b & 0 & 0 & \cdots & a \end{bmatrix},$$ we can obtain its eigenvalues analytically using the properties of circulant matrices. Is it still possible to get the eigenvalues of the following "imperfect circulant matrix" analytically? $$\begin{bmatrix} c & d & 0 & & 0 \\ 0 & a & b & \cdots & 0 \\ 0 & 0 & a & & 0 \\ & \vdots & &\ddots & \vdots\\ b & 0 & 0 & \cdots & a \end{bmatrix}$$ Thank you!
• @RodrigodeAzevedo Sorry, I made a mistake in the original problem. I have edited the problem to correct the mistake. Thank you very much. – Jong Howe Aug 16 '18 at 14:36
• Are you only interested in exact solutions, or also in perturbation bounds assuming $c$ and $d$ are not too far from $a$ and $b$? – ippiki-ookami Aug 16 '18 at 14:48
• @ippiki-ookami Thank you for the comment. I am interested in exact solutions. – Jong Howe Aug 24 '18 at 11:45
$$\begin{bmatrix} c & d & 0 & \dots & 0 & 0\\ 0 & a & b & \dots & 0 & 0\\ 0 & 0 & a & \dots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \dots & a & b\\ b & 0 & 0 & \dots & 0 & a \end{bmatrix} = \mathrm U + b \, \mathrm e_n \mathrm e_1^\top =: \mathrm M$$ where $\rm U$ is an $n \times n$ upper triangular and bidiagonal matrix.
Using the matrix determinant lemma, the characteristic polynomial of matrix $\rm M$ is \begin{aligned} \det (s \mathrm I_n - \mathrm M) &= \det \big( (s \mathrm I_n - \mathrm U) - b \, \mathrm e_n \mathrm e_1^\top \big) \\ &= \det (s \mathrm I_n - \mathrm U) - b \, \mathrm e_1^\top \mbox{adj} \big( s \mathrm I_n - \mathrm U \big) \,\mathrm e_n\\ &= \det (s \mathrm I_n - \mathrm U) - b \, \mathrm e_n^\top \mbox{adj}^\top \big( s \mathrm I_n - \mathrm U \big) \,\mathrm e_1 \\ &= \det (s \mathrm I_n - \mathrm U) - b \, \mbox{cofactor}_{n,1}\big( s \mathrm I_n - \mathrm U \big) \end{aligned} Since $\rm U$ is upper triangular, $$\det (s \mathrm I_n - \mathrm U) = (s-a)^{n-1} (s-c)$$ The $(n,1)$-th cofactor is $$(-1)^{n+1} (-b)^{n-2} (-d) = (-1)^{2n} b^{n-2} d = b^{n-2} d$$ and, thus, $$\det (s \mathrm I_n - \mathrm M) = \color{blue}{(s-a)^{n-1} (s-c) - b^{n-1} d}$$
• Thanks very much. I still wonder whether it is possible to get a closed form solution like $s_i = f(a,b,c,d,n,i)$. – Jong Howe Aug 24 '18 at 11:48
• @JongHowe Aren't you operating under the assumption that it's possible to find the roots of any polynomial in terms of arithmetic operations and radicals? It has been known for almost 2 centuries that it is not possible for polynomials of degree $5$ or higher. Take a look at this. However, finding the exact characteristic polynomial is possible, which is what I did. Just forget about finding its exact roots. – Rodrigo de Azevedo Aug 24 '18 at 12:11
• Thanks for the answer. I know that it is impossible to find the roots of a general polynomial in terms of arithmetic operations and radicals. But I was not sure whether the characteristic polynomial you gave, which has a somewhat special form, has an exact solution. But maybe it is still impossible. Thank you very much. – Jong Howe Aug 27 '18 at 1:27
• @JongHowe I wouldn't use such strong words.
For example, if $d=0$, then we have eigenvalues $a$ with multiplicity $n-1$ and $c$ with multiplicity $1$. However, if $d \neq 0$, the eigenvalues are not so obvious. – Rodrigo de Azevedo Aug 27 '18 at 4:41
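The blue characteristic polynomial is easy to sanity-check numerically (a sketch with arbitrary illustrative values; the determinant is computed by plain Gaussian elimination, no libraries assumed):

```python
def det(mat):
    # determinant via Gaussian elimination with partial pivoting
    n = len(mat)
    a = [row[:] for row in mat]
    result = 1.0
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        if abs(a[piv][col]) < 1e-12:
            return 0.0
        if piv != col:
            a[col], a[piv] = a[piv], a[col]
            result = -result          # each row swap flips the sign
        result *= a[col][col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for k in range(col, n):
                a[r][k] -= f * a[col][k]
    return result

def imperfect_circulant(n, a, b, c, d):
    # (c, d) in the first row, (a, b) bidiagonal below, b in the corner
    M = [[0.0] * n for _ in range(n)]
    M[0][0], M[0][1] = c, d
    for i in range(1, n):
        M[i][i] = a
        if i + 1 < n:
            M[i][i + 1] = b
    M[n - 1][0] = b
    return M

n, a, b, c, d = 5, 2.0, 3.0, 5.0, 7.0
M = imperfect_circulant(n, a, b, c, d)
for s in (0.0, 1.0, -2.5, 4.0):
    char = det([[(s if i == j else 0.0) - M[i][j] for j in range(n)] for i in range(n)])
    closed = (s - a) ** (n - 1) * (s - c) - b ** (n - 1) * d
    assert abs(char - closed) < 1e-8 * max(1.0, abs(closed))
```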
# The math behind supply curves

When I took introductory microeconomics towards the end of my undergraduate degree, I expected almost everything in the course to be nonsense. Most of the claims of the economics profession that I had encountered seemed to me to be so far from the world around me that I expected the course to be full of errors. However, my initial expectations were somewhat subverted during the beginning portion of the class. Having taken many math courses, I could see that many of the concepts I was learning were simply basic calculus proofs. Having taken physics, I was used to simplifying assumptions, and many of the assumptions that were made seemed like good candidates for a first approximation of reality.

Midway through the course, however, an assumption was made that confirmed my view that most of the conclusions of the course do not apply to reality. The assumption was not explicitly stated as an assumption, but rather stated as if it were an empirical fact. The assumption was that firms face decreasing returns to scale. This assumption is crucial to deriving supply curves, as the analysis below will show.

We start with the equation for the profit of a firm, where $\pi$ is profit, $p$ is price, $q$ is the quantity produced, and $c(q)$ is the cost to produce quantity $q$:

$\pi=pq-c(q)$

Firms are assumed to maximize profits. We can find the maximum of the above profit function using basic calculus. First take the derivative and set it equal to zero. Since we are trying to find the profit-maximizing quantity, we take the derivative with respect to quantity:

$\frac{d\pi}{dq}=p+q\frac{dp}{dq}-\frac{dc(q)}{dq}=0$

Since we assume price taking, a firm's price is independent of the quantity it produces, and so

$\frac{dp}{dq}=0$

This gives us the critical points of the profit function, which occur when $p=\frac{dc(q)}{dq}$: the familiar price = marginal cost from econ 101.
However, the critical points could be either maxima or minima of the profit function. We can test which by using the second derivative test. If $\frac{d^2\pi}{dq^2}<0$ then we have a maximum, if it is greater than zero we have a minimum, and if it is equal to zero the test is inconclusive. Since $\frac{d^2\pi}{dq^2}=-\frac{d^2c(q)}{dq^2}$, the profit function has a maximum where $p=\frac{dc(q)}{dq}$ when $\frac{d^2c(q)}{dq^2}>0$, or when marginal cost is increasing. Otherwise $p=\frac{dc(q)}{dq}$ is a minimum of the profit function and the function has its maximum at a quantity of either zero or infinity. If marginal costs are constant, firm behaviour depends on the price. If the price is equal to marginal cost, the firm does not care how much is produced. If the price is greater than marginal cost, the firm wants to sell as much as possible, and if the price is less than marginal cost, the firm wants to produce a quantity of zero.

This means that as long as marginal costs are increasing, or as long as it gets more expensive to produce additional goods the more is being produced, the usual supply curves from econ 101 apply: firms produce the amount where price is equal to marginal cost. However, if it gets cheaper to produce a good as more is produced, then firms want to sell as much as possible or nothing at all, depending on whether the market price is less than or more than the minimum it costs them to produce a good. Drawing a supply curve implies that marginal costs are increasing as more of a good is produced. While it might seem odd to say that a firm wants to produce an infinite amount at a given price, in practice all we are saying is that a firm wants to produce as much as possible. Some graduate textbooks, for example Microeconomic Theory, by Mas-Colell, Whinston, and Green, use the fact that firms want to produce as much as possible to say that price taking makes no sense for firms without increasing costs.
However, there is no problem for the theory; all that happens is that firms in that situation are limited by demand. Readers who are familiar with Keynesian macroeconomic ideas will have an inkling of how this could be important to macroeconomics. I will explore this more in later posts.

Despite the simplicity of the above derivation, many economists seem to forget the limited situations in which supply curves exist when teaching and thinking about economics. Most students of economics also do not come away with an understanding of the extremely limited situations in which the economics 101 material they are learning applies. In fact, as I hope to show in future posts, assuming that marginal cost functions are increasing is a problem even for papers that are currently being published; for example, many macroeconomics papers assume increasing marginal costs.

## 5 thoughts on "The math behind supply curves"
1. Arilando says: Where does the +q dp/dq come from in the first derivative? Shouldn't it simply be p – dc(q)/dq? Like
2. Product rule. We aren't treating p as a constant here. If it is constant, as in perfect competition, dp/dq=0 and our answer reduces to what you have above. Like
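The two cases in the derivation above can be illustrated with a brute-force profit maximization (a sketch with made-up cost functions, not from the post):

```python
def best_quantity(p, cost, q_max=100.0, steps=100001):
    # brute-force argmax of pi(q) = p*q - cost(q) on a grid over [0, q_max]
    best_q, best_pi = 0.0, float("-inf")
    for i in range(steps):
        q = q_max * i / (steps - 1)
        pi = p * q - cost(q)
        if pi > best_pi:
            best_q, best_pi = q, pi
    return best_q

# Increasing marginal cost: c(q) = q^2, so c'(q) = 2q and the interior
# optimum sits at p = 2q, i.e. q = p/2.
q_interior = best_quantity(10.0, lambda q: q * q)

# Constant marginal cost below the price: c(q) = 3q with p = 10.
# Profit rises without bound, so the maximizer is the grid boundary:
# the firm wants to produce as much as possible.
q_corner = best_quantity(10.0, lambda q: 3.0 * q)
```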
# Automatically deleting files when a Django model is deleted

> Note that when a model is deleted, related files are not deleted. If you need to cleanup orphaned files, you'll need to handle it yourself (for instance, with a custom management command that can be run manually or scheduled to run periodically via e.g. cron).

## Django REST Framework

```python
class MyModelViewSet(ModelViewSet):
    queryset = MyModel.objects.all().order_by('-date_created')
    serializer_class = MyModelSerializer

    def perform_destroy(self, instance):
        instance.file.delete(save=False)
        instance.delete()
```

## Using signals

```python
from django.db.models.signals import pre_delete
```
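The signals section is cut off above; a minimal sketch of what it would look like (assuming the same `MyModel` with a `file` field as in the DRF example):

```python
from django.db.models.signals import pre_delete
from django.dispatch import receiver

@receiver(pre_delete, sender=MyModel)
def delete_file_on_delete(sender, instance, **kwargs):
    # Fires once per instance before deletion, including queryset deletes;
    # save=False keeps Django from re-saving the instance being removed.
    if instance.file:
        instance.file.delete(save=False)
```

Unlike the `perform_destroy` override, this also covers deletes performed outside the ViewSet.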
# OEMMFF94InitialCharges
bool OEMMFF94InitialCharges(OEMolBase &mol)
Assigns integral or fractional formal atomic charges (for example +1/3 on guanidinium nitrogens), which are used in the MMFF94 force field to obtain final partial atomic charges by applying bond charge increments. Requires a prior call to OEMMFFAtomTypes. Charge 0.0 is assigned to any atom missing a proper MMFF94 atom type. Returns true when charges are successfully assigned.
Note: This function silently assigns zero charges to all atoms if MMFF94 atom types have not been assigned to the passed molecule.
Hint: We highly recommend assigning initial MMFF charges to a molecule by calling the OEAssignPartialCharges function with the OECharges.Initial parameter.
Comparing the images
I have a drawable which I want to compare to. I created a drawable object with it, and when I log it with toString, I get:
android.graphics.drawable.BitmapDrawable@41f95548
I am randomly selecting a button, and I want to keep randomly selecting until the image of the image button is not android.graphics.drawable.BitmapDrawable@41f95548. Now, I want to compare using this code:
do{ }while(buttonBackground = android.graphics.drawable.BitmapDrawable@41f95548 ));
buttonBackground is a drawable object. This, for some reason, gets me an error. How can I compare images with a code like android.graphics.drawable.BitmapDrawable@41f95548? What is this code called? Thanks
You could make it legal by saying while(buttonBackground.toString().equals("android.graphics.drawable.BitmapDrawable@41f95548")) That's apples-to-apples. But it probably won't be doing what you intend. A better way will be to remember which index you chose, and then compare the index til you get a different one. Something like
int currentIndex = ...; // the current button image's index
int newIndex;
do {
    newIndex = rng.nextInt(4);
} while (newIndex != currentIndex);
(By the way, do you mean != or ==? Are you choosing randomly til you get a different one?) Hope that helps a little!
• Hey, so for the index thing...I wish I could do that, but I can't. See, my program is getting a random button and then swapping the properties of the two buttons. I was doing the index thing here: gamedev.stackexchange.com/questions/112025/… But...it was too slow. I like your "apples to apples" method. Why did you mention it will probably not work? Thanks, I have liked, and will mark best answer once I get this! – NullPointerException Nov 29 '15 at 1:03
• Also...I meant ==. I wrote it wrong in my screen shot. I guess I should change it to .equals when trying your "apples to apples" method, right?
– NullPointerException Nov 29 '15 at 1:05
• Hmm...I think I know why you were saying it won't work...That code is changing every time, which means it won't work sometimes? It is not constant. Why is that, isn't there only one image code? What is that code even called? Thanks so much – NullPointerException Nov 29 '15 at 1:13
• How should I compare the images now? I just don't want the image button that I pick from the array list to have that one image. – NullPointerException Nov 29 '15 at 1:14
# Homework Help: Continuous topology problem
1. Jun 2, 2010
### beetle2
Let $(X, T)$ be a topological space. Given the set $Y$ and the function $f : X \rightarrow Y$, define $U := \{H \subseteq Y \mid f^{-1}(H)\in T\}$. Show that $U$ is the finest topology on $Y$ with respect to which $f$ is continuous.
3. The attempt at a solution
I was wondering: is this implying that $U$ is the quotient topology?
2. Jun 2, 2010
### Dick
Re: Topology
No, it's not a quotient topology. You aren't identifying any points. Suppose you give Y a finer topology than U? Use the topological definition of continuity.
3. Jun 2, 2010
Re: Topology
Thanks
4. Jun 2, 2010
### Hurkyl
Staff Emeritus
Re: Topology
Yes he is, via the equivalence relation P~Q iff f(P)=f(Q).
5. Jun 2, 2010
### beetle2
Re: Topology
So if it is a quotient topology, will I need to prove the iff statement?
6. Jun 2, 2010
### beetle2
Re: Topology
I mean, do I have to show that if there is an equivalence relation, there is a function which is a homeomorphism?
7. Jun 2, 2010
### Dick
Re: Topology
What is ~ supposed to mean?! f:R->R with the usual topology. f(x)=x^2. You are defining a topology on Y. Of course, f(1)=f(-1). How is 1~(-1)?
8. Jun 3, 2010
### Hurkyl
Staff Emeritus
Re: Topology
~ is a symbol denoting an equivalence relation, which I subsequently defined. For any function f : X -> Y, Ker(f) is the set of pairs (a,b) for which f(a)=f(b). Theorem (First isomorphism theorem): Ker(f) is an equivalence relation on X. The corresponding set of equivalence classes is canonically bijective with the image of f. For any surjective continuous function f:X -> Y, it's fair to ask if the canonical bijection X/Ker(f) -> Y is a homeomorphism, in which case it would be fair to say that Y has the quotient topology.
9.
Jun 3, 2010
### Dick
Re: Topology
Ok, so it's a quotient in some sense on XxY if I get your drift. I still think this is a bit of a distraction from the gloriously simple approach of just picking a topology strictly finer than U and then showing f is not continuous in that topology.
10. Jun 4, 2010
### Hurkyl
Staff Emeritus
Re: Topology
I wasn't trying to suggest an approach to the problem; I just didn't want him to be misinformed about quotient topologies. (Incidentally, for the OP, the definition Wikipedia gives of "quotient topology" is exactly the topology you wrote... with the extra condition that f is supposed to be surjective)
11. Jun 4, 2010
### boneill3
Re: Topology
Thanks for your help guys. In my textbook it says that the quotient topology is the finest. However, in my tutorials, whenever they talked about quotient topologies they always mentioned something about an equivalence relation. I think this question was meant to make me think about it, which I did. Once again, thanks
12. Jun 4, 2010
### Dick
Re: Topology
I see what you mean now. Thanks for clarifying.
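For completeness, the direct argument the thread converges on fits in a few lines (a sketch):

```latex
\text{$U$ is a topology: } f^{-1}\Big(\bigcup_i H_i\Big)=\bigcup_i f^{-1}(H_i),\qquad
f^{-1}(H_1\cap H_2)=f^{-1}(H_1)\cap f^{-1}(H_2), \\
\text{and } f^{-1}(\emptyset)=\emptyset\in T,\quad f^{-1}(Y)=X\in T. \\
\text{$f$ is continuous with respect to $U$ by construction. Finest: if $T'$ is any topology on $Y$ making $f$ continuous, then} \\
H\in T' \implies f^{-1}(H)\in T \implies H\in U,\quad\text{so } T'\subseteq U.
```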
# A Notebook Zettelkasten
• Very cool find! Instant Zettel material:
# 202102261003 Your first notes will suck in 20 years time
#zettelkasten #quality
It may be inevitable that your first notes will not hold up to scrutiny 10 or 20 years from now. Cyberneticist Ross Ashby (1903--1972) left behind 25 volumes of journals that include his studies. In volume 9, he commented on the poor quality of the first volumes:
> My early notes, especially Volumes 2 and 3, are appalling even for their
> simple ignorance and inaccuracy. They are quite unfit for any human
> eye.[#20210226ashby][]
Volumes 1--3 range from 1928--1939 (ages 25--36), while volume 9 ranges from 1946--1947 (ages 43--44).
[#20210226books]: "Bookshelf of W. Ross Ashby's Journals", <http://www.rossashby.info/journal/volume/index.html>
[#20210226ashby]: Ross Ashby, Journals, Volume 9, page 1967, <http://www.rossashby.info/journal/page/1967.html>
Author at Zettelkasten.de • https://christiantietze.de/
• @ctietze, now you have me curious what else is in your #quality tag... But yes, I made a note like this too. I'm of the opinion that cybernetics is a great topic for anyone using the Zettelkasten method, as a way to understand how knowledge ecosystems can form and flourish. Observations logged here: write.as/via-poetica
• I have a #quality tag actually; most of what is there comes from Zen and the Art of Motorcycle Maintenance by Robert Pirsig
• I think Andy Matuschak adds support to the idea of such a zettel when he said, "just like writing itself, it's hard to write notes that are worth developing over time."
# Numerical discretization and the Schrodinger equation, for a simulation
I'm solving the time-dependent Schrodinger equation numerically, in this case simplified to one dimension, and I don't know how to discretize it, or whether what I have done is okay. $$i\hbar\frac{\partial }{\partial t}\psi(x,t)=-\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\psi(x,t)+V(x,t)\psi(x,t)$$ We can set $$2m \rightarrow 1$$ and $$\hbar \rightarrow 1$$, so for the numerical solution I make a grid in time and space, and start with an initial state $$\psi(x,0)=\psi_0$$ and a discretized potential $$V(x)$$. With the typical discretization of derivatives, we get the following expression, where subscripts indicate time and superscripts indicate space: $$i\frac{\psi_{k+1}^i-\psi_k^i}{\Delta t}=-\frac{\psi^{i+1}_k+\psi^{i-1}_k-2\psi^i_k}{\Delta x^2}+V^i_k\psi^i_k$$ So, to iterate in time, $$\psi_{k+1}^i=\psi_{k}^i+i\frac{\Delta t}{\Delta x^2}\left( \psi^{i+1}_k+\psi^{i-1}_k-2\psi^i_k\right)-iV^i_k\psi^i_k\Delta t$$ I'm applying this expression in my program and I don't obtain good results. I don't know if the complex variable is badly defined, or if there is some problem with the potential. Thanks!
• What are the results you are calling not good? What time step size $\Delta t$ and grid size $\Delta x$ are you using? And what is the form of the potential $V$ that you are using? Dec 12 '20 at 17:58
• For space, $x\in[0,1]$ in $100$ steps, and for time I forced $\frac{\Delta t}{\Delta x^2}<0.5$, because otherwise it doesn't converge. At the moment I'm testing with $V=0$, and I don't intend for the potential to depend on time. I tested with a Gaussian wave, and it doesn't behave as it should: it splits into multiple oscillations and doesn't move. And I don't know whether the argument used isn't right, or whether it is some other kind of programming problem.
Dec 12 '20 at 18:37 The numerical method you have chosen for the spatial discretization is a second-order central finite difference method, and you have coupled that to a Forward Euler time scheme, which gives you an explicit scheme that is first order in time and second order in space. Together, it is often called the Forward-Time, Central-Space (FTCS) scheme. This scheme is perfectly stable provided a small time step is used, $$\alpha \Delta t / \Delta x^2 \leq 0.5$$ where $$\alpha$$ is a diffusion speed. However, stability isn't the only thing required for the Schrodinger equation. The numerical method must also be unitary, such that: $$\int_{-\infty}^{\infty} |\psi|^2 dx = 1$$ for all time. I am not an expert in the Schrodinger equation, so I will summarize what these slides show for the analysis. If we lump the derivative and potential function into an operator $$H$$, the Schrodinger equation boils down to: $$i \frac{\partial \psi}{\partial t} = H \psi$$ which has the exact solution: $$\psi(x,t) = e^{-i H t} \psi(x,0)$$ For the FTCS scheme, the operator looks like: $$\psi_{k+1}^i = (1-iH\Delta t)\psi_{k}^i$$ where $$H$$ is the derivative operator plus the potential function. However, this is not unitary. The Cayley representation of the exponential operator gives a second-order accurate scheme that is also unitary, and looks like: $$e^{-i H t} \approx \frac{1 - i \frac{1}{2} H \Delta t}{1 + i\frac{1}{2} H \Delta t}$$ which gives a numerical method that looks like: $$(1 + i \frac{1}{2} H \Delta t) \psi_{k+1}^i = (1 - i \frac{1}{2} H \Delta t) \psi_{k}^i$$ This should give much better results. There are plenty of discussions on various methods over on our sister site, SciComp.SE. Okay, so a unitary method is nice and all, but is the FTCS impossible to use? Not really. In fact, it should give increasingly better answers (and increasingly unitary answers) as the grid and/or time step are refined. 
So it's possible that the numerical dissipation and dispersion is why you are getting bad answers. To help figure out if it is a code problem or a numerical method/non-unitary operator problem, you can try to refine the time step so it is very much smaller than is needed for stability, say $$\Delta t /\Delta x^2 = 0.005$$. Obviously the answer will take longer, but the numerical damping from the first order time scheme should go away and you'd be left with errors due to the spatial scheme only (approximately). Then you can try refining the grid as well, so that those errors decrease. If you perform those tests, the solution should converge to a good, high-quality solution. If your oscillations get worse or the solution blows up or something, then it's a sign there may be errors in the code that need to be fixed. We can't help debug that specifically though.
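The Cayley scheme from the answer can be sketched in plain Python (assumptions: $\hbar = 1$ and $2m = 1$ as in the question, $\psi = 0$ pinned at both boundaries, and the tridiagonal system solved with the Thomas algorithm; none of this is the asker's actual code):

```python
import cmath

def thomas(sub, diag, sup, rhs):
    # Solve a complex tridiagonal system: forward elimination, back substitution.
    # Safe without pivoting here because the matrix is diagonally dominant.
    n = len(rhs)
    cp, dp = [0j] * n, [0j] * n
    cp[0], dp[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m if i < n - 1 else 0j
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0j] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cn_step(psi, V, dt, dx):
    # One step of (1 + i H dt/2) psi_new = (1 - i H dt/2) psi_old, where
    # H psi_i = -(psi_{i+1} + psi_{i-1} - 2 psi_i)/dx^2 + V_i psi_i.
    n = len(psi)
    r = 1j * dt / (2 * dx * dx)
    diag = [1 + 2 * r + 0.5j * dt * V[i] for i in range(n)]
    sub = [-r] * n
    sup = [-r] * n
    rhs = []
    for i in range(n):
        left = psi[i - 1] if i > 0 else 0j
        right = psi[i + 1] if i < n - 1 else 0j
        rhs.append(psi[i] + r * (left + right - 2 * psi[i]) - 0.5j * dt * V[i] * psi[i])
    return thomas(sub, diag, sup, rhs)

# Gaussian wave packet on [0, 1] with V = 0, as in the question.
# Note dt/dx^2 = 4, far beyond the FTCS limit, yet stable and unitary.
n, dx, dt = 199, 1.0 / 200, 1e-4
xs = [(i + 1) * dx for i in range(n)]
psi = [cmath.exp(-((x - 0.5) / 0.1) ** 2 + 50j * x) for x in xs]
norm0 = sum(abs(p) ** 2 for p in psi) * dx
psi = [p / norm0 ** 0.5 for p in psi]
V = [0.0] * n
for _ in range(100):
    psi = cn_step(psi, V, dt, dx)
norm = sum(abs(p) ** 2 for p in psi) * dx   # stays 1 to machine precision
```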
# Effective ways of teaching regex to students who know Java
My students are knowledgeable about Java but need to know something about regular expressions. Many students find them difficult and intimidating. The students don't need to know every detail, but do need to know enough to work effectively and learn more. What are effective ways of teaching the fundamentals of regular expressions, in Java or otherwise? I'd like the students to understand how it works, and not only what each pattern does. This year it was decided at my school (I'm a teaching assistant this year) to teach the students a bit about regular expressions. We teach in Java. The students have never faced anything like regex in the past. The students I've asked (we have a test group we try new ideas on for feedback: a diverse group in terms of level and background1, consisting of 3-4 students) seem to find regex very difficult (we just gave them the pattern list and explained each pattern). 1background in terms of knowledge in the field.
• This is too small for an answer but start by showing them why regular expressions are worth their time to learn. I found when I was learning regex, it was a horrid experience until I found use cases where they made my life easy enough to be worth the cost of learning. – Cort Ammon - Reinstate Monica Aug 6 '17 at 18:31
• @CortAmmon good point. – ItamarG3 Aug 6 '17 at 18:32
• Regex is great when you are creating a language-using tool, or something that processes stuff written in a (regular) language. Not much use for them otherwise. Great for Domain Specific Languages. Why do they need to learn them? Maybe they need background in language design first? – user737 Aug 6 '17 at 20:39
> Many students find them difficult and intimidating.
Then the challenge is to make them fun and engaging and (ideally) interactive.
I myself have undertaken the challenge here and there of teaching myself regex, and two resources rose to the top in my own learning: RegexOne and Regex Crossword. RegexOne is great as a lesson-by-lesson tutorial. Students can type and see in real time what test cases their regex does/does not match. There are 15.5 lessons and 8 practice problems. The practice problems draw from relevant use cases, such as extracting a phone number, trimming white space, or reading a log file. Regex Crossword mixes regex with logic: students have to fill in a square grid and track how gradually more complex regular expressions interact with each other. (That sounds a bit vague, but an example like this one makes it clear what I mean.) The crosswords tend to have fun themes/solutions. There are many crosswords to solve, including ones with great complexity, so students will have to spend lots of time working at it if they'd like to complete all the puzzles. When it comes to working in class, it'd be pretty easy to work with some of the smaller crosswords in a non-digital manner. You could put one on the board at the start of every class as a warm-up or print/copy some of your favorites from the site. Maybe even have students work in groups and compete to solve a series of puzzles in the fastest possible time. Between these two resources, students will at least have a pretty good start to working with regex.
• In a similar way you could create a Jeopardy game. The Jeopardy "answers" are REs. The contestants are given an RE for some category and the first to "ring the bell" gets to reply with a valid match - in the form of a question, of course. I used to keep a set of hotel desk bells for this sort of thing. This only works for deciphering/reading, of course. As you proceed through a category the REs get harder. The categories can be RE types or something more elaborate.
– Buffy Aug 5 '17 at 22:06
• A similar activity to RegexOne, without the tutorial text but with larger test suites for each problem, is alf.nu/RegexGolf – Peter Taylor Aug 10 '17 at 7:30
In Java: Don't
In programming language X: Don't
Treat regular expressions as their own language. Teach regex in a language-agnostic fashion using tools that process regular expressions rather than trying to teach regular expressions and their implementation in Java, or any other language. Each language has its own, idiosyncratic, engine and method of use. Different engines implement subsets of the entirety of the regular expression specifications. As a starting point, use an online version of a regular expression editor. There are two fairly good versions that I'm aware of. First is Regex101, which has a decent IDE-style interface, including a live explanation of the current expression and a token list to choose from. The one feature I like is that it allows you to "Save & Share" the expression, which adds it to a library of expressions available to others. Of course, that library is also available to you, so you can try to find one that's already written to do what you need. One major issue I have with this site is that it still does not implement the entire specification of regular expressions. Specifically, it does not handle If-Then-Else conditionals: (?(subpattern)yes|no) and (?(group)yes|no). A second site, which will handle the If-Then-Else conditionals, is Regex Storm. This one produces a nice set of tables for the "output" of the expression, but lacks the collection of tokens and library of expressions of Regex101. As near as I have been able to determine, this editor implements the entire set of regular expressions available in all engines extant, including right-to-left text. There is also an extensive reference on this site that can be terse at times, but does include examples for most of the complex, or potentially confusing, constructs.
After selecting the editor to use, local, online, etc., the problem now becomes how to work through the "language" of regex. I don't like to reinvent the wheel. Therefore, the best option I can think of is to follow the outline of the regex tutorial built in to Perl, "perlretut", which should be available from any complete Perl install by typing perldoc perlretut on the command line. Perl.org's documentation also has it online in HTML for all to use. Granted, all the samples there are given in the context of use within Perl, so they need to be converted for your classroom use. The idea isn't, necessarily, to use their examples, but to follow the sequence they have created. They've invented the wheel, now you get to design the rims and hub caps so that the vehicle you need is "road ready." To cover the very basic outline, below are the relevant sections from the Table of Contents for perlretut. For the "dyed-in-the-wool" purists, it even begins with the inexplicably obligatory "Hello World" of program language instruction.
• Part 1: The basics • Simple word matching • Using character classes • Matching this or that • Grouping things and hierarchical matching • Extracting matches • Backreferences • Relative backreferences • Named backreferences • Alternative capture group numbering • Position information • Non-capturing groupings • Matching repetitions • Possessive quantifiers • Building a regexp • Using regular expressions in Perl • Part 2: Power tools • More on characters, strings, and character classes • Compiling and saving regular expressions • Composing regular expressions at runtime • Embedding comments and modifiers in a regular expression • Looking ahead and looking behind • Using independent subexpressions to prevent backtracking • Conditional expressions • Defining named patterns • Recursive patterns • A bit of magic: executing Perl code in a regular expression • Backtracking control verbs • Pragmas and debugging One additional resource which I have found helpful from time to time is Regular-Expressions.info. They include a reference and a tutorial, among other things. The significance of this site is that it often has a different way of "saying" the same thing as other sites, which can reach a student when the standard explanation just does not "click" for the student. • Most importantly: call them regex or regexp, not regular expressions, or the person who later teaches them the Chomsky hierarchy will curse you from the depths of their heart. – Peter Taylor Aug 10 '17 at 7:23 First and foremost, regular expressions are a mathematical concept. Lacking the relevant background and intuition does make it intimidating indeed. Students must have at least a vague understanding of what an FA is, and what makes it F. IMO, understanding the abilities and limitations of FAs is the key. I found that conducting the creation of a regexp interpreter is an eye opener. An ultimate goal of the exercise is to make a grep clone. Start simple, from strcmp. Introduce a dot.
Spend some time explaining what makes it a meta character, and how to tell a meta from a normal character. Introduce a star. Say a few words on Kleene's theorem, and on how this innocent star is really a centerpiece of a profound result. At this point students should have enough intuition to grasp the rest of the syntactic sugar. • I dont even know what you mean by FA or that being F, yet I'm pretty good with regex. Similarly, I don't understand why you think math is needed to use, or understand, them. They're pattern matching, right? Not formula matching. – Gypsy Spellweaver Aug 6 '17 at 0:18 • @GypsySpellweaver FA is finite automaton, where F stands for finite. – user58697 Aug 6 '17 at 0:24 • So, how is knowing any of that important to using regex, in programming or CLI? In implementing a regex interpreter, that might be important, can't say, but that's not part of the question from the OP. – Gypsy Spellweaver Aug 6 '17 at 0:33 • @GypsySpellweaver OP didn't ask of using regex; the question is explicitly about fundamentals and understanding how it works. You cannot teach fundamentals without resorting to fundamentals. – user58697 Aug 6 '17 at 0:38 • @GypsySpellweaver I agree on the misreading. OTOH I have seen enough senior programmers trying to parse stack languages with regexes. I am sure you did too. I do want my students to know what they are doing. – user58697 Aug 6 '17 at 0:53 It is important to know that there are two languages involved, each with its own rules and metacharacters. Characterclassese is the language of character classes, which are character wildcards. Its special characters include [, ], ^ and -. Every character class stands for a single character. Literal characters are themselves. Regexese is a second language. Its atoms are character classes. Here are some of its rules: • Juxtaposition means "and then immediately."
• It has an operator for "or", which is | • Parentheses are delimiters and can override the order of operations • There are several postfix multiplicity operators which have higher precedence than juxtaposition. • Special character classes include ., ^ and $. These are metacharacters in the language as well. It is important to point out the existence of the two layers and how they work together. "Hedwig," the O'Reilly book on Regexes, is great.
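The two-layer rules above can be demonstrated directly at an interpreter. The sketch below uses Python's re module purely for illustration (the same patterns work in most engines): character classes as one-character atoms, | for "or", parentheses overriding precedence, and a quantifier binding tighter than juxtaposition.

```python
import re

# Layer 1: character classes -- each one stands for exactly one character.
assert re.fullmatch(r"[A-Za-z]", "Q")      # literal range inside [ ]
assert re.fullmatch(r"[^0-9]", "x")        # ^ negates inside a class

# Layer 2: regexese -- atoms combined by juxtaposition, | and quantifiers.
assert re.fullmatch(r"cat|dog", "dog")     # | means "or"
assert re.fullmatch(r"gr(a|e)y", "grey")   # parentheses override precedence
assert re.fullmatch(r"ab*", "abbb")        # * binds tighter than juxtaposition:
assert not re.fullmatch(r"ab*", "abab")    #   "a then any number of b", not "(ab)*"
```

Running these in class makes the precedence point concrete: students often expect `ab*` to mean zero or more repetitions of "ab" until the last two assertions show otherwise.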
panel.stripplot {lattice} R Documentation ## Default Panel Function for stripplot ### Description This is the default panel function for stripplot. Also see panel.superpose ### Usage panel.stripplot(x, y, jitter.data = FALSE, factor = 0.5, amount = NULL, horizontal = TRUE, groups = NULL, ..., identifier = "stripplot") ### Arguments x,y coordinates of points to be plotted jitter.data whether points should be jittered to avoid overplotting. The actual jittering is performed inside panel.xyplot, using its jitter.x or jitter.y argument (depending on the value of horizontal). factor, amount amount of jittering, see jitter horizontal logical. If FALSE, the plot is ‘transposed’ in the sense that the behaviours of x and y are switched. x is now the ‘factor’. Interpretation of other arguments change accordingly. See documentation of bwplot for a fuller explanation. groups optional grouping variable ... additional arguments, passed on to panel.xyplot identifier A character string that is prepended to the names of grobs that are created by this panel function. ### Details Creates a stripplot (one-dimensional scatterplot) of x for each level of y (or vice versa, depending on the value of horizontal) ### Author(s) Deepayan Sarkar Deepayan.Sarkar@R-project.org ### See Also stripplot, jitter
## mathmath333 one year ago How many 4 digit numbers are there whose decimal notation contains not more than 2 distinct digits ? 1. mathmath333 \large \color{black}{\begin{align} & \normalsize \text{ How many 4 digit numbers are there whose decimal notation}\hspace{.33em}\\~\\ & \normalsize \text{ contains not more than 2 distinct digits ? }\hspace{.33em}\\~\\ & a.)\ 672 \hspace{.33em}\\~\\ & b.)\ 576 \hspace{.33em}\\~\\ & c.)\ 360 \hspace{.33em}\\~\\ & d.)\ 448 \hspace{.33em}\\~\\ \end{align}} 2. Drigobri 3. MrNood how do we know you are right? a straight answer like this is no use to anyone please explain how you derived it, or better still give the poster the method , and let HIM derive it 4. ParthKohli Note to self: No silly mistakes. No silly mistakes. No silly mistakes. 5. imqwerty well what do u mean by decimal notation here :/ sry u posted one more question like decimal notation i got stuck there too 6. imqwerty sry to ask such a silly ques 7. mathmath333 decimal notaion ={0, 1,2,3,4,5,6,7,8,9,} i think 8. imqwerty 9. ParthKohli If there is only one unique digit then there are 9 such numbers (1111, 2222, ...) If there are two unique digits: First, consider all numbers where zero does not appear. (i) Three of one digit and one of the other one. $$9\cdot \binom{4}{3} \cdot 8$$ (ii) Two of each digit $$9\cdot \binom{4}{2}\cdot 8$$ Now consider those numbers where zero does appear. (i) Three zeroes and one other digit. This can only be done in 9 ways (1000, 2000, ...) (ii) Two zeroes and two of the other digit. This can be done in $$9\cdot \binom{3}{2}$$ 10. ParthKohli I think "decimal notation" means nothing but base 10 integers here. 11. ParthKohli (iii) One zero and three of other digits.$3\cdot 9~ways$ 12. ParthKohli wow why am I getting 765 13. ParthKohli @ganeshie8 !!! 14. ParthKohli Can you spot any mistake? 15. mathmath333 i think u double counted something 16. ParthKohli What? 17. ParthKohli I'm actually getting 765 + 27 = 792 :P 18. 
ganeshie8 Slightly modified : If there is only one unique digit then there are 9 such numbers (1111, 2222, ...) If there are two unique digits: First, consider all numbers where zero does not appear. (i) Three of one digit and one of the other one. $$9\cdot \binom{4}{3} \cdot 8$$ (ii) Two of each digit $$9\cdot \dfrac{\binom{4}{2}}{\color{red}{2}}\cdot 8$$ Now consider those numbers where zero does appear. (i) Three zeroes and one other digit. This can only be done in 9 ways (1000, 2000, ...) (ii) Two zeroes and two of the other digit. This can be done in $$9\cdot \binom{3}{2}$$ (iii) One zero and three of the other digit. This can be done in $$\color{red}{9\cdot \binom{3}{1}}$$ @ParthKohli 19. ganeshie8 20. ParthKohli why would you change $$\binom{3}2$$ to $$\binom{3}1$$ and good catch
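As a sanity check on ganeshie8's corrected tally (9 + 288 + 216 + 9 + 27 + 27 = 576), a brute-force count is quick to run; Python is used here purely for illustration:

```python
# Enumerate every 4-digit number and keep those whose decimal notation
# uses at most 2 distinct digits.
count = sum(1 for n in range(1000, 10000) if len(set(str(n))) <= 2)
print(count)  # 576, matching option (b)
```

This also confirms that ParthKohli's 765 came from double-counting the two-of-each-digit case, the term ganeshie8 divided by 2.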
# Does a compact semilocally simply connected geodesic space have the homotopy type of a CW complex? Does a compact semilocally simply connected geodesic space have the homotopy type of a compact CW complex? Actually what I'd like to know is whether the fundamental group of such a space is finitely presented. Edit: As Bruno Martelli notes, this is obviously false, but the question of whether the fundamental group is finitely presented, which is what I really want to know, is still open. - I don't understand the comment in your edit: The fundamental group of the Hawaiian earrings is known to be uncountable, hence it cannot be finitely presented, at least as a discrete group. Could you clarify what you mean? –  Theo Buehler Jan 12 '11 at 16:25 The Hawaiian earrings is not semi-locally simply-connected. –  Bruno Martelli Jan 12 '11 at 17:38 Do semilocally simply connected, geodesic, and compact imply that small balls are simply connected? The "counterexamples" I can think of aren't geodesic. If you have a locally finite cover by absolute retracts (ARs) with all intersections ARs, then Weil's theorem says the nerve is homotopy equivalent to the space. So maybe you could do a similar thing with a covering by simply-connected sets to get a compact nerve whose fundamental group surjects that of the space. (Local simple-connectivity would allow a map from the 2-skeleton of the nerve to the space.) –  Richard Kent Jan 12 '11 at 18:44 The cone over the Hawaiian earrings is semi-locally simply-connected and also geodesic, right? This space is not locally simply-connected. –  Bruno Martelli Jan 12 '11 at 18:56 @Bruno, I think the cone over the Hawaiian earring is not geodesic, because there are points that are very close together (at height 1/2 in the cone) that are not joined by a short geodesic. (I could be wrong, I'm easily confused thinking about weird spaces.) 
–  Richard Kent Jan 12 '11 at 19:01 The bouquet of infinitely many shrinking 2-spheres in $\mathbb R^3$ centered at $(0,0,1/n)$ and of radius $1/n$ is a compact simply connected geodesic space, which is not homotopy equivalent to a compact CW complex. - That was way too easy. Thanks! –  Jim Conant Jan 12 '11 at 13:35 I hope I have not goofed, but I think the answer to your modified question is yes: The fundamental group of a semi-locally simply connected, compact and geodesic space is finitely presented. Here are the ingredients - all numbers in parentheses refer to Bridson-Haefliger, Metric spaces of non-positive curvature, Springer Grundlehren, 1999, Part I. • The universal covering space $\widetilde{X}$ equipped with the length metric induced by the covering projection is a length space (3.25). • The fundamental group $\pi_{1}(X)$ acts on $\widetilde{X}$ properly and cocompactly by isometries (8.3 (2)), see also (8.5). • If a length space $\widetilde{X}$ admits a proper and cocompact action by isometries then it is locally compact (8.4 (1)) and hence proper and geodesic (3.7). • A group is finitely presented if and only if it acts properly and cocompactly by isometries on a simply connected geodesic space (8.11). All this taken together yields that $\widetilde{X}$ is a simply connected geodesic metric space and $\pi_{1}(X)$ acts properly and cocompactly by isometries, hence $\pi_{1}(X)$ must be finitely presented. - I'd mark this answer correct too if I could! –  Jim Conant Jan 12 '11 at 19:52
1. The increase in pressure required to decrease the 200 L volume of a liquid by 0.004% (in kPa) is (Bulk modulus of the liquid = 2100 MPa) Maharashtra CET 2006 Mechanical Properties Of Solids AFMC 2010 Atoms 3. To draw maximum current from a combination of cells, how should the cells be grouped? AFMC 2006 Current Electricity NEET 2019 Waves 5. The period of revolution of planet A around the sun is 8 times that of B. The distance of A from the sun is how many times greater than that of B from the sun? AIPMT 1997 Gravitation 6. Two discs of same moment of inertia rotating about their respective axes passing through centre and perpendicular to the plane of disc with angular velocities $\omega_1$ and $\omega_2$. They are brought into contact face to face, coinciding the axes of rotation. The expression for loss of energy during this process is: NEET 2017 System Of Particles And Rotational Motion 7. The temperature of 100 g of water is to be raised from 24°C to 90°C by adding steam to it. The mass of the steam required for this purpose is JIPMER 2015 Thermal Properties Of Matter 8. The diameter of a flywheel is increased by 1%. Increase in its moment of inertia about the central axis is JIPMER 2015 System Of Particles And Rotational Motion 9. A body of mass 4 kg is acted upon by a constant force, travels a distance of 5 m in the first second and a distance of 2 m in the third second. The force acting on the body is KCET 2008 Laws Of Motion 10. The spectrum of an oil flame is an example for ........... KCET 2010 Dual Nature Of Radiation And Matter 1. An acidic buffer solution can be prepared by mixing the solution of IIT JEE 1981 Equilibrium 2. A mixture of benzaldehyde and formaldehyde on heating with aqueous NaOH solution gives IIT JEE 2001 Aldehydes Ketones and Carboxylic Acids 3. Which of the following solutions will have pH close to 1.0 ? IIT JEE 1992 Equilibrium 4.
Acetone is mixed with bleaching powder to give AFMC 2004 Aldehydes Ketones and Carboxylic Acids 5. 75% of a first order reaction was completed in 32 minutes. When was 50% of the reaction completed COMEDK 2010 Chemical Kinetics 7. The radii of $Na^+$ and $Cl^{-}$ ions are 95 pm and 181 pm respectively. The edge length of NaCl unit cell is KCET 2006 The Solid State 8. Acidic hydrogen is present in IIT JEE 1985 Hydrocarbons 9. X is heated with soda lime and gives ethane. X is AFMC 2005 Aldehydes Ketones and Carboxylic Acids 10. The oxidation state of Cr in CrO6 is : NEET 2019 Redox Reactions AIEEE 2002 Sets 2. Let $\omega \ne 1$ be a cube root of unity and S be the set of all non-singular matrices of the form $\begin {bmatrix} 1 & a & b \\ \omega & 1 & c \\ \omega^2 & \omega & 1 \end {bmatrix}$ where each of a, b and c is either $\omega$ or $\omega^2$. Then, the number of distinct matrices in the set S is IIT JEE 2011 Determinants 3. For $r = 0, 1, ... , 10,$ if $A_r,B_r$ and $C_r$ denote respectively the coefficient of $x^r$ in the expansions of $(1 + x)^{10}, (1 + x)^{20}$ and $(1 + x)^{30}$. Then, $\displaystyle \sum A_r(B_{10}B_r-C_{10}A_r)$ is equal to IIT JEE 2010 Binomial Theorem 4. The centre of the circle passing through the point (0, 1) and touching the curve $y = x^2$ at $(2,4)$ is IIT JEE 1983 Conic Sections 5. Perpendiculars are drawn from points on the line $\frac{x+2}{2}=\frac{y+1}{-1}= \frac{z}{3}$ to the plane $x+ y + z = 3$. The feet of perpendiculars lie on the line JEE Advanced 2013 Introduction to Three Dimensional Geometry 6. A ray of light along $x+\sqrt 3 \, y=\sqrt 3$ gets reflected upon reaching X-axis, the equation of the reflected ray is JEE Main 2013 Straight Lines 7.
Let AB be a chord of the circle $x^2 + y^2 = r^2$ subtending a right angle at the centre. Then, the locus of the centroid of the $\Delta PAB$ as P moves on the circle, is IIT JEE 2001 Conic Sections 1. Fluoride pollution mainly affects AIPMT 2003 Environmental Issues 2. The edible part of the fruit of apple is KCET 2013 Morphology of Flowering Plants 3. Edible part of banana AIPMT 2001 Morphology of Flowering Plants 4. Venous heart is found in Odisha JEE 2012 Animal Kingdom 5. The endosperm of gymnosperm is AIPMT 1999 Sexual Reproduction in Flowering Plants 6. The largest herbarium in India is located in Odisha JEE 2009 Diversity In The Living World 7. Which one holds the corona radiata? Maharashtra CET 2007 Human Reproduction 8. Meiosis in Dryopteris takes place during JIPMER 2010 Plant Kingdom 9. Root cap is absent in Haryana PMT 2007 Morphology of Flowering Plants 10. DNA is composed of repeating units of AIPMT 1991 Biomolecules
# Contains Duplicate II in C++ Suppose we have an array and an integer k; we have to check whether there are two distinct indices i and j in the array such that nums[i] = nums[j] and the absolute difference between i and j is at most k. So, if the input is like [1,2,4,1] and k = 3, then the output will be True. To solve this, we will follow these steps − • Define an array nn of pairs • for initialize i := 0, when i < size of nums, update (increase i by 1), do − • insert {nums[i], i} at the end of nn • sort the array nn • for initialize i := 1, when i < size of nn, update (increase i by 1), do − • if first element of nn[i] is same as first element of nn[i - 1] and |second of nn[i] - second of nn[i - 1]| <= k, then − • return true • return false ## Example Let us see the following implementation to get a better understanding −

#include <bits/stdc++.h>
using namespace std;
class Solution {
public:
   bool containsNearbyDuplicate(vector<int>& nums, int k) {
      vector<pair<int, int> > nn;
      for (int i = 0; i < (int)nums.size(); i++) {
         nn.push_back(make_pair(nums[i], i));
      }
      sort(nn.begin(), nn.end());
      for (int i = 1; i < (int)nn.size(); i++) {
         if (nn[i].first == nn[i - 1].first and abs(nn[i].second - nn[i - 1].second) <= k)
            return true;
      }
      return false;
   }
};
int main(){
   Solution ob;
   vector<int> v = {1,2,4,1};
   cout << (ob.containsNearbyDuplicate(v, 3));
}

## Input {1,2,4,1} ## Output 1 Published on 10-Jun-2020 15:54:34
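The sort-based approach above is O(n log n). For comparison, here is a hedged sketch of the common O(n) alternative, written in Python rather than C++: a sliding-window set keeps only the last k elements, so any duplicate found inside the window is automatically within distance k. Function and variable names here are illustrative, not from the article.

```python
def contains_nearby_duplicate(nums, k):
    # Window holds at most the k most recent values.
    window = set()
    for i, x in enumerate(nums):
        if x in window:
            return True            # duplicate within distance k
        window.add(x)
        if len(window) > k:
            window.remove(nums[i - k])  # drop the value that left the window
    return False
```

On the article's example, `contains_nearby_duplicate([1, 2, 4, 1], 3)` returns True, matching the C++ output.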
# How to prevent Quartz from starting the next run of a job (same JobKey) before the previous run has finished executing

@DisallowConcurrentExecution
public class IncrCrawlJob implements Job {
    /** logger */
    // ...
}

note: • All runs scheduled for a given period will still be executed, even if the previous run finishes only after that period has elapsed. @DisallowConcurrentExecution is an annotation that can be added to the Job class that tells Quartz not to execute multiple instances of a given job definition (that refers to the given job class) concurrently. Notice the wording there, as it was chosen very carefully. In the example from the previous section, if “SalesReportJob” has this annotation, then only one instance of “SalesReportForJoe” can execute at a given time, but it can execute concurrently with an instance of “SalesReportForMike”. The constraint is based upon an instance definition (JobDetail), not on instances of the job class. However, it was decided (during the design of Quartz) to have the annotation carried on the class itself, because it does often make a difference to how the class is coded.
## LaTeX forum ⇒ LyX ⇒ Wrong page numbers in ToC of unnumbered sections Topic is solved Information and discussion about LyX, a WYSIWYM editor, available for Linux, Windows and Mac OS X systems. sascq Posts: 2 Joined: Wed Aug 21, 2019 10:50 pm ### Wrong page numbers in ToC of unnumbered sections Hi all, I am a LyX user and wanted to add an unnumbered chapter and sections to the table of contents, document class "Book". So I added this in the LaTeX preamble:

`\addcontentsline{toc}{chapter}{Title}`
`\addcontentsline{toc}{section}{Title}`

In the PDF everything is ok, except for the fact that in the ToC the chapter and all following sections are listed under page number 1. How can I add (the correct) page numbers? It should be 7, 13, 16 etc. instead of 1, 1, 1 ... Stefan Kottwitz Posts: 9521 Joined: Mon Mar 10, 2008 9:44 pm Hi sascq, welcome to the forum! Put such commands as TeX code (ERT) into the document at the beginning of the corresponding chapter or section. Stefan
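In plain LaTeX terms, the ERT advice amounts to something like the following minimal sketch (assumed structure, not the poster's actual document): \addcontentsline records the page number current at the point where it is executed, so placing it in the preamble makes every entry point at page 1, while placing it right after the unnumbered heading records that heading's page.

```latex
\documentclass{book}
\begin{document}
\tableofcontents

% Unnumbered chapter; the \addcontentsline must sit here, at the
% chapter itself, so the ToC entry picks up this page's number.
\chapter*{Title}
\addcontentsline{toc}{chapter}{Title}

\section*{Subtitle}
\addcontentsline{toc}{section}{Subtitle}
% ... chapter text ...
\end{document}
```

In LyX that is exactly what inserting the command as TeX code (ERT) immediately after the chapter or section heading achieves.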
## Differentiation of Banach-space-valued additive processes ### Volume 147 / 2001 Studia Mathematica 147 (2001), 131-153 MSC: 47A35, 47D03, 46E30, 46E40. DOI: 10.4064/sm147-2-3 #### Abstract Let $X$ be a Banach space and $({\mit \Omega } ,{\mit \Sigma } ,\mu )$ be a $\sigma$-finite measure space. Let $L$ be a Banach space of $X$-valued strongly measurable functions on $({\mit \Omega } ,{\mit \Sigma } ,\mu )$. We consider a strongly continuous $d$-dimensional semigroup $T=\{ T(u):u=(u_{1},\ldots ,u_{d}),\ u_{i}>0$, $1\leq i\leq d\}$ of linear contractions on $L$. We assume that each $T(u)$ has, in a sense, a contraction majorant and that the strong limit $T(0)=\hbox {strong-lim}_{u\rightarrow 0}T(u)$ exists. Then we prove, under some suitable norm conditions on the Banach space $L$, that a differentiation theorem holds for $d$-dimensional bounded processes in $L$ which are additive with respect to the semigroup $T$. This generalizes a differentiation theorem obtained previously by the author under the assumption that $L$ is an $X$-valued $L_{p}$-space, with $1\leq p<\infty$. #### Authors • Ryotaro Sato, Department of Mathematics, Faculty of Science, Okayama University, Okayama 700-8530, Japan
Project Euler 22: Names scores using dictionary I solved Project Euler 22 using Python 3. The problem reads as follows: Using names.txt (right click and 'Save Link/Target As...'), a 46K text file containing over five-thousand first names, begin by sorting it into alphabetical order. Then working out the alphabetical value for each name, multiply this value by its alphabetical position in the list to obtain a name score. For example, when the list is sorted into alphabetical order, COLIN, which is worth 3 + 15 + 12 + 9 + 14 = 53, is the 938th name in the list. So, COLIN would obtain a score of 938 × 53 = 49714. What is the total of all the name scores in the file? As you can see, the problem has much to do with handling lists of items and not so much with mathematics, unlike most other Euler problems. I chose to break the list of names into a dictionary, with the index/rank of each name as the key. Are there ways to improve its performance that you can see? I'm also interested in any other possible improvements in style, naming, PEP8, etc.

'''
Project Euler 22: Names scores
https://projecteuler.net/problem=22
'''
import string
from typing import List, Dict

def get_names_from_file(file_path: str) -> List:
    '''
    Parses a text file containing names in this format: "FOO","BAR","BAZ"
    and returns a list of strings, e.g., ["FOO","BAR","BAZ"]
    '''
    with open(file_path, "r") as file_:
        return file_.read().replace("\"", "").split(",")

def get_sorted_indexed_dict(items: List, is_0_indexed: bool = False) -> Dict:
    '''
    Takes a list of strings and returns an alphabetically sorted dict
    with the item's 1-indexed (by default) position as key, e.g.,
    list arg: [ "FOO", "BAR", "BAZ" ]
    returns:  { 1 : "BAR", 2 : "BAZ", 3 : "FOO" }
    The return dict can be 0-indexed by adding a 2nd argument as True.
    '''
    ix_start = 1
    if is_0_indexed:
        ix_start = 0
    items = sorted(items)
    numbered_dict = {}
    for item in items:
        numbered_dict[items.index(item) + ix_start] = item
    return numbered_dict

def get_alpha_values() -> Dict:
    '''
    Assigns ASCII chars A-Z, inclusive, to a number value 1-26, respectively.
    '''
    alpha_values = {}
    index = 1
    letters = list(string.ascii_uppercase)
    for letter in letters:
        alpha_values[letter] = index
        index += 1
    return alpha_values

def get_word_alpha_value(word: str, alpha_values: Dict) -> int:
    '''
    Calculates the value of each letter in the word and returns
    the sum of the letter values.
    '''
    word_value = 0
    for letter in list(word):
        word_value += alpha_values[letter]
    return word_value

def get_word_ranked_value(rank: int, word: str, alpha_values: Dict) -> int:
    '''
    Calculates the ranked value according to problem 22, i.e., the value
    of each word's letters multiplied by its alphabetical rank.
    '''
    return rank * get_word_alpha_value(word, alpha_values)

def get_pe22_solution(file_path: str) -> int:
    '''
    Returns the solution to Project Euler 22 based on the file path provided.
    '''
    alpha_values = get_alpha_values()
    # sanity check some alpha values
    assert alpha_values['A'] == 1
    assert alpha_values['Z'] == 26
    assert alpha_values['S'] == 19
    names = get_names_from_file(file_path)
    names = get_sorted_indexed_dict(names)
    # sanity checks name order based on PE 22 problem description
    assert names[1] == "AARON"
    assert names[938] == "COLIN"
    assert names[5163] == "ZULMA"
    name_values = [
        get_word_ranked_value(k, v, alpha_values)
        for (k, v) in names.items()
    ]
    return sum(name_values)

if __name__ == "__main__":
    FILE = "names.txt"
    print("The answer to PE 22 is: {}".format(get_pe22_solution(FILE)))

Nice PEP8-compliant code, Phrancis. Good job! I only have two comments to make regarding that: you usually have to use triple-double quoted strings when it comes to docstrings. From the docs (PEP257): For consistency, always use """triple double quotes""" around docstrings.
Use r"""raw triple double quotes""" if you use any backslashes in your docstrings. For Unicode docstrings, use u"""Unicode triple-quoted strings""". Also, you missed one newline between your imports and the first function. It's not such a big deal but I thought it's worth mentioning it. In this line: ... You can use single quotes to get rid of the escape character \:

return file_.read().replace('"', '').split(',')

You can also omit the read file mode as that's the default mode when you read a file:

with open(file_path) as file_:
    ...

There's more than one place where you wrote:

something = list(string)

Except for the place where you sorted the list, you don't need list(string), because you can already iterate over a string just fine. Use enumerate() more often. The enumerate() function adds a counter to an iterable. That said, you can rewrite your get_alpha_values() function like this:

def get_alpha_values() -> Dict:
    alpha_values = {}
    for index, letter in enumerate(string.ascii_uppercase, 1):
        alpha_values[letter] = index
    return alpha_values

Enumerate takes a second argument which lets you start from whatever position you want (in our case, that would be 1). We can also make use of the get() method of dicts and make your get_word_alpha_value() more straightforward:

def get_word_alpha_value(word: str, alpha_values: Dict) -> int:
    return sum([alpha_values.get(letter, 0) for letter in word])

The get method of a dict (like, for example, alpha_values) works just like indexing the dict, except that, if the key is missing, instead of raising a KeyError it returns the default value (if you call .get with just one argument, the key, the default value is None).
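Putting the review's suggestions together, a standalone sketch of the scoring core looks like this; it uses the COLIN example from the problem statement as a sanity check (no names.txt needed):

```python
import string

# Letter values via enumerate(), as suggested above.
alpha_values = {letter: index
                for index, letter in enumerate(string.ascii_uppercase, 1)}

def word_alpha_value(word):
    # dict.get with a default of 0 skips any non-uppercase character.
    return sum(alpha_values.get(letter, 0) for letter in word)

print(word_alpha_value("COLIN"))        # 53
print(938 * word_alpha_value("COLIN"))  # 49714, as in the problem statement
```

Note this is a sketch of the scoring logic only; the file parsing and ranking from the original solution still apply unchanged.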
# Equations of state ## Base class class burnman.eos.EquationOfState[source] Bases: object This class defines the interface for an equation of state that a mineral uses to determine its properties at a given $$P, T$$. In order to define a new equation of state, you should define these functions. All functions should accept and return values in SI units. In general these functions are functions of pressure, temperature, and volume, as well as a “params” object, which is a Python dictionary that stores the material parameters of the mineral, such as reference volume, Debye temperature, reference moduli, etc. The functions for volume and density are just functions of temperature, pressure, and “params”; after all, it does not make sense for them to be functions of volume or density. volume(pressure, temperature, params)[source] Parameters pressurefloat Pressure at which to evaluate the equation of state. $$[Pa]$$ temperaturefloat Temperature at which to evaluate the equation of state. $$[K]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns volumefloat Molar volume of the mineral. $$[m^3]$$ pressure(temperature, volume, params)[source] Parameters volumefloat Molar volume at which to evaluate the equation of state. [m^3] temperaturefloat Temperature at which to evaluate the equation of state. [K] paramsdictionary Dictionary containing material parameters required by the equation of state. Returns pressurefloat Pressure of the mineral, including cold and thermal parts. [Pa] density(volume, params)[source] Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field. Parameters volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns densityfloat Density of the mineral.
$$[kg/m^3]$$ grueneisen_parameter(pressure, temperature, volume, params)[source] Parameters pressurefloat Pressure at which to evaluate the equation of state. $$[Pa]$$ temperaturefloat Temperature at which to evaluate the equation of state. $$[K]$$ volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns gammafloat Grueneisen parameter of the mineral. $$[unitless]$$ isothermal_bulk_modulus(pressure, temperature, volume, params)[source] Parameters pressurefloat Pressure at which to evaluate the equation of state. $$[Pa]$$ temperaturefloat Temperature at which to evaluate the equation of state. $$[K]$$ volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns K_Tfloat Isothermal bulk modulus of the mineral. $$[Pa]$$ adiabatic_bulk_modulus(pressure, temperature, volume, params)[source] Parameters pressurefloat Pressure at which to evaluate the equation of state. $$[Pa]$$ temperaturefloat Temperature at which to evaluate the equation of state. $$[K]$$ volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns K_Sfloat Adiabatic bulk modulus of the mineral. $$[Pa]$$ shear_modulus(pressure, temperature, volume, params)[source] Parameters pressurefloat Pressure at which to evaluate the equation of state. $$[Pa]$$ temperaturefloat Temperature at which to evaluate the equation of state. $$[K]$$ volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns Gfloat Shear modulus of the mineral.
$$[Pa]$$ molar_heat_capacity_v(pressure, temperature, volume, params)[source] Parameters pressurefloat Pressure at which to evaluate the equation of state. $$[Pa]$$ temperaturefloat Temperature at which to evaluate the equation of state. $$[K]$$ volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns C_Vfloat Heat capacity at constant volume of the mineral. $$[J/K/mol]$$ molar_heat_capacity_p(pressure, temperature, volume, params)[source] Parameters pressurefloat Pressure at which to evaluate the equation of state. $$[Pa]$$ temperaturefloat Temperature at which to evaluate the equation of state. $$[K]$$ volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns C_Pfloat Heat capacity at constant pressure of the mineral. $$[J/K/mol]$$ thermal_expansivity(pressure, temperature, volume, params)[source] Parameters pressurefloat Pressure at which to evaluate the equation of state. $$[Pa]$$ temperaturefloat Temperature at which to evaluate the equation of state. $$[K]$$ volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns alphafloat Thermal expansivity of the mineral. $$[1/K]$$ gibbs_free_energy(pressure, temperature, volume, params)[source] Parameters pressurefloat Pressure at which to evaluate the equation of state. [Pa] temperaturefloat Temperature at which to evaluate the equation of state. [K] volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3] paramsdictionary Dictionary containing material parameters required by the equation of state. 
Returns:
- G : float. Gibbs free energy of the mineral.

helmholtz_free_energy(pressure, temperature, volume, params)[source]
Parameters:
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- F : float. Helmholtz free energy of the mineral.

entropy(pressure, temperature, volume, params)[source]
Returns the entropy at the pressure and temperature of the mineral. $$[J/K/mol]$$

enthalpy(pressure, temperature, volume, params)[source]
Parameters:
- pressure : float. Pressure at which to evaluate the equation of state. $$[Pa]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- H : float. Enthalpy of the mineral.

molar_internal_energy(pressure, temperature, volume, params)[source]
Parameters:
- pressure : float. Pressure at which to evaluate the equation of state. $$[Pa]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- U : float. Internal energy of the mineral.

validate_parameters(params)[source]
The params object is just a dictionary associating mineral physics parameters for the equation of state. Different equations of state can have different parameters, and the parameters may have ranges of validity. The intent of this function is twofold. First, it can check for the existence of the parameters that the equation of state needs, and second, it can check whether the parameters have reasonable values. Unreasonable values will frequently be due to unit issues (e.g., supplying bulk moduli in GPa instead of Pa).
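As a concrete illustration of the kind of sanity check described above, here is a minimal, hypothetical sketch (not burnman's actual implementation; the function name and threshold are illustrative) that flags a reference bulk modulus that looks like it was supplied in GPa rather than Pa:

```python
def check_bulk_modulus_units(params):
    """Return a list of warning strings for suspicious parameter values.

    Hypothetical sketch: a reference isothermal bulk modulus 'K_0' is
    typically on the order of 1e10-1e12 Pa, so a value below 1e5 almost
    certainly means GPa were supplied instead of Pa.
    """
    warnings = []
    if "K_0" not in params:
        warnings.append("missing parameter: K_0")
    elif params["K_0"] < 1.0e5:
        warnings.append("K_0 = %g looks like GPa; expected Pa" % params["K_0"])
    return warnings
```

For example, a params dictionary with K_0 = 130.0 would be flagged, while K_0 = 1.3e11 would pass silently.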
In the base class this function does nothing, and an equation of state is not required to implement it. This function will not return anything, though it may raise warnings or errors.
Parameters:
- params : dictionary. Dictionary containing material parameters required by the equation of state.

## Murnaghan¶

class burnman.eos.Murnaghan[source]
Base class for the isothermal Murnaghan equation of state, as described in [Mur44].

volume(pressure, temperature, params)[source]
Returns volume $$[m^3]$$ as a function of pressure $$[Pa]$$.

pressure(temperature, volume, params)[source]
Parameters:
- volume : float. Molar volume at which to evaluate the equation of state. $$[m^3]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- pressure : float. Pressure of the mineral, including cold and thermal parts. $$[Pa]$$

isothermal_bulk_modulus(pressure, temperature, volume, params)[source]
Returns isothermal bulk modulus $$K_T$$ $$[Pa]$$ as a function of pressure $$[Pa]$$, temperature $$[K]$$ and volume $$[m^3]$$.

adiabatic_bulk_modulus(pressure, temperature, volume, params)[source]
Returns adiabatic bulk modulus $$K_s$$ of the mineral. $$[Pa]$$

shear_modulus(pressure, temperature, volume, params)[source]
Returns shear modulus $$G$$ of the mineral. $$[Pa]$$ Currently not included in the Murnaghan EOS, so omitted.

entropy(pressure, temperature, volume, params)[source]
Returns the molar entropy $$\mathcal{S}$$ of the mineral. $$[J/K/mol]$$

molar_internal_energy(pressure, temperature, volume, params)[source]
Returns the internal energy $$\mathcal{E}$$ of the mineral. $$[J/mol]$$

gibbs_free_energy(pressure, temperature, volume, params)[source]
Returns the Gibbs free energy $$\mathcal{G}$$ of the mineral. $$[J/mol]$$

molar_heat_capacity_v(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, return a very large number.
$$[J/K/mol]$$

molar_heat_capacity_p(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, return a very large number. $$[J/K/mol]$$

thermal_expansivity(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, return zero. $$[1/K]$$

grueneisen_parameter(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, return zero. $$[unitless]$$

validate_parameters(params)[source]
Check for existence and validity of the parameters.

density(volume, params)
Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters:
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- density : float. Density of the mineral. $$[kg/m^3]$$

enthalpy(pressure, temperature, volume, params)
Parameters:
- pressure : float. Pressure at which to evaluate the equation of state. $$[Pa]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- H : float. Enthalpy of the mineral.

helmholtz_free_energy(pressure, temperature, volume, params)
Parameters:
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- F : float. Helmholtz free energy of the mineral.

## Birch-Murnaghan¶

### Base class¶

class burnman.eos.birch_murnaghan.BirchMurnaghanBase[source]
Base class for the isothermal Birch-Murnaghan equation of state. This is third order in strain, and has no temperature dependence.
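For orientation, the Murnaghan relations documented above can be sketched in a few lines of standalone Python. This is an independent transcription of the textbook formulas, not burnman's implementation; the parameter keys V_0, K_0 and Kprime_0 are assumptions following common convention:

```python
def murnaghan_pressure(volume, params):
    """Murnaghan P(V) = (K_0 / K'_0) * ((V_0 / V)**K'_0 - 1).  [Pa]"""
    return params["K_0"] / params["Kprime_0"] * (
        (params["V_0"] / volume) ** params["Kprime_0"] - 1.0)

def murnaghan_volume(pressure, params):
    """Inverse relation V(P) = V_0 * (1 + K'_0 * P / K_0)**(-1 / K'_0).  [m^3]"""
    return params["V_0"] * (
        1.0 + params["Kprime_0"] * pressure / params["K_0"]
    ) ** (-1.0 / params["Kprime_0"])

def murnaghan_isothermal_bulk_modulus(pressure, params):
    """K_T = K_0 + K'_0 * P; linear in pressure by construction.  [Pa]"""
    return params["K_0"] + params["Kprime_0"] * pressure
```

A quick consistency check is the round trip: murnaghan_pressure(murnaghan_volume(P, params), params) recovers P to floating-point precision.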
However, the shear modulus is sometimes fit to a second-order function; if that is the case, use the second-order variant. For more, see burnman.birch_murnaghan.BM2 and burnman.birch_murnaghan.BM3.

volume(pressure, temperature, params)[source]
Returns volume $$[m^3]$$ as a function of pressure $$[Pa]$$.

pressure(temperature, volume, params)[source]
Parameters:
- volume : float. Molar volume at which to evaluate the equation of state. $$[m^3]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- pressure : float. Pressure of the mineral, including cold and thermal parts. $$[Pa]$$

isothermal_bulk_modulus(pressure, temperature, volume, params)[source]
Returns isothermal bulk modulus $$K_T$$ $$[Pa]$$ as a function of pressure $$[Pa]$$, temperature $$[K]$$ and volume $$[m^3]$$.

adiabatic_bulk_modulus(pressure, temperature, volume, params)[source]
Returns adiabatic bulk modulus $$K_s$$ of the mineral. $$[Pa]$$

shear_modulus(pressure, temperature, volume, params)[source]
Returns shear modulus $$G$$ of the mineral. $$[Pa]$$

entropy(pressure, temperature, volume, params)[source]
Returns the molar entropy $$\mathcal{S}$$ of the mineral. $$[J/K/mol]$$

molar_internal_energy(pressure, temperature, volume, params)[source]
Returns the internal energy $$\mathcal{E}$$ of the mineral. $$[J/mol]$$

gibbs_free_energy(pressure, temperature, volume, params)[source]
Returns the Gibbs free energy $$\mathcal{G}$$ of the mineral. $$[J/mol]$$

molar_heat_capacity_v(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

molar_heat_capacity_p(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

thermal_expansivity(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return zero.
$$[1/K]$$

grueneisen_parameter(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return zero. $$[unitless]$$

validate_parameters(params)[source]
Check for existence and validity of the parameters.

density(volume, params)
Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters:
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- density : float. Density of the mineral. $$[kg/m^3]$$

enthalpy(pressure, temperature, volume, params)
Parameters:
- pressure : float. Pressure at which to evaluate the equation of state. $$[Pa]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- H : float. Enthalpy of the mineral.

helmholtz_free_energy(pressure, temperature, volume, params)
Parameters:
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- F : float. Helmholtz free energy of the mineral.

### BM2¶

class burnman.eos.BM2[source]
Third-order Birch-Murnaghan isothermal equation of state. This uses the second-order expansion for shear modulus.

adiabatic_bulk_modulus(pressure, temperature, volume, params)
Returns adiabatic bulk modulus $$K_s$$ of the mineral. $$[Pa]$$

density(volume, params)
Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters:
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- density : float. Density of the mineral. $$[kg/m^3]$$

enthalpy(pressure, temperature, volume, params)
Parameters:
- pressure : float. Pressure at which to evaluate the equation of state. $$[Pa]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- H : float. Enthalpy of the mineral.

entropy(pressure, temperature, volume, params)
Returns the molar entropy $$\mathcal{S}$$ of the mineral. $$[J/K/mol]$$

gibbs_free_energy(pressure, temperature, volume, params)
Returns the Gibbs free energy $$\mathcal{G}$$ of the mineral. $$[J/mol]$$

grueneisen_parameter(pressure, temperature, volume, params)
Since this equation of state does not contain temperature effects, simply return zero. $$[unitless]$$

helmholtz_free_energy(pressure, temperature, volume, params)
Parameters:
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- F : float. Helmholtz free energy of the mineral.

isothermal_bulk_modulus(pressure, temperature, volume, params)
Returns isothermal bulk modulus $$K_T$$ $$[Pa]$$ as a function of pressure $$[Pa]$$, temperature $$[K]$$ and volume $$[m^3]$$.

molar_heat_capacity_p(pressure, temperature, volume, params)
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

molar_heat_capacity_v(pressure, temperature, volume, params)
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

molar_internal_energy(pressure, temperature, volume, params)
Returns the internal energy $$\mathcal{E}$$ of the mineral.
$$[J/mol]$$

pressure(temperature, volume, params)
Parameters:
- volume : float. Molar volume at which to evaluate the equation of state. $$[m^3]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- pressure : float. Pressure of the mineral, including cold and thermal parts. $$[Pa]$$

shear_modulus(pressure, temperature, volume, params)
Returns shear modulus $$G$$ of the mineral. $$[Pa]$$

thermal_expansivity(pressure, temperature, volume, params)
Since this equation of state does not contain temperature effects, simply return zero. $$[1/K]$$

validate_parameters(params)
Check for existence and validity of the parameters.

volume(pressure, temperature, params)
Returns volume $$[m^3]$$ as a function of pressure $$[Pa]$$.

### BM3¶

class burnman.eos.BM3[source]
Third-order Birch-Murnaghan isothermal equation of state. This uses the third-order expansion for shear modulus.

adiabatic_bulk_modulus(pressure, temperature, volume, params)
Returns adiabatic bulk modulus $$K_s$$ of the mineral. $$[Pa]$$

density(volume, params)
Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters:
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- density : float. Density of the mineral. $$[kg/m^3]$$

enthalpy(pressure, temperature, volume, params)
Parameters:
- pressure : float. Pressure at which to evaluate the equation of state. $$[Pa]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- H : float. Enthalpy of the mineral.

entropy(pressure, temperature, volume, params)
Returns the molar entropy $$\mathcal{S}$$ of the mineral.
$$[J/K/mol]$$

gibbs_free_energy(pressure, temperature, volume, params)
Returns the Gibbs free energy $$\mathcal{G}$$ of the mineral. $$[J/mol]$$

grueneisen_parameter(pressure, temperature, volume, params)
Since this equation of state does not contain temperature effects, simply return zero. $$[unitless]$$

helmholtz_free_energy(pressure, temperature, volume, params)
Parameters:
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- F : float. Helmholtz free energy of the mineral.

isothermal_bulk_modulus(pressure, temperature, volume, params)
Returns isothermal bulk modulus $$K_T$$ $$[Pa]$$ as a function of pressure $$[Pa]$$, temperature $$[K]$$ and volume $$[m^3]$$.

molar_heat_capacity_p(pressure, temperature, volume, params)
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

molar_heat_capacity_v(pressure, temperature, volume, params)
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

molar_internal_energy(pressure, temperature, volume, params)
Returns the internal energy $$\mathcal{E}$$ of the mineral. $$[J/mol]$$

pressure(temperature, volume, params)
Parameters:
- volume : float. Molar volume at which to evaluate the equation of state. $$[m^3]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- pressure : float. Pressure of the mineral, including cold and thermal parts. $$[Pa]$$

shear_modulus(pressure, temperature, volume, params)
Returns shear modulus $$G$$ of the mineral.
$$[Pa]$$

thermal_expansivity(pressure, temperature, volume, params)
Since this equation of state does not contain temperature effects, simply return zero. $$[1/K]$$

validate_parameters(params)
Check for existence and validity of the parameters.

volume(pressure, temperature, params)
Returns volume $$[m^3]$$ as a function of pressure $$[Pa]$$.

### BM4¶

class burnman.eos.BM4[source]
Base class for the isothermal Birch-Murnaghan equation of state. This is fourth order in strain, and has no temperature dependence.

volume(pressure, temperature, params)[source]
Returns volume $$[m^3]$$ as a function of pressure $$[Pa]$$.

pressure(temperature, volume, params)[source]
Parameters:
- volume : float. Molar volume at which to evaluate the equation of state. $$[m^3]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- pressure : float. Pressure of the mineral, including cold and thermal parts. $$[Pa]$$

isothermal_bulk_modulus(pressure, temperature, volume, params)[source]
Returns isothermal bulk modulus $$K_T$$ $$[Pa]$$ as a function of pressure $$[Pa]$$, temperature $$[K]$$ and volume $$[m^3]$$.

adiabatic_bulk_modulus(pressure, temperature, volume, params)[source]
Returns adiabatic bulk modulus $$K_s$$ of the mineral. $$[Pa]$$

shear_modulus(pressure, temperature, volume, params)[source]
Returns shear modulus $$G$$ of the mineral. $$[Pa]$$

entropy(pressure, temperature, volume, params)[source]
Returns the molar entropy $$\mathcal{S}$$ of the mineral. $$[J/K/mol]$$

molar_internal_energy(pressure, temperature, volume, params)[source]
Returns the internal energy $$\mathcal{E}$$ of the mineral. $$[J/mol]$$

gibbs_free_energy(pressure, temperature, volume, params)[source]
Returns the Gibbs free energy $$\mathcal{G}$$ of the mineral. $$[J/mol]$$

molar_heat_capacity_v(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return a very large number.
$$[J/K/mol]$$

molar_heat_capacity_p(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

thermal_expansivity(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return zero. $$[1/K]$$

grueneisen_parameter(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return zero. $$[unitless]$$

validate_parameters(params)[source]
Check for existence and validity of the parameters.

density(volume, params)
Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters:
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- density : float. Density of the mineral. $$[kg/m^3]$$

enthalpy(pressure, temperature, volume, params)
Parameters:
- pressure : float. Pressure at which to evaluate the equation of state. $$[Pa]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- H : float. Enthalpy of the mineral.

helmholtz_free_energy(pressure, temperature, volume, params)
Parameters:
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- F : float. Helmholtz free energy of the mineral.

## Vinet¶

class burnman.eos.Vinet[source]
Base class for the isothermal Vinet equation of state. References for this equation of state are [VFSR86] and [VSFR87].
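The Vinet pressure-volume relation itself is compact enough to sketch directly. The following standalone transcription of the standard formula (not burnman's code; the parameter keys V_0, K_0 and Kprime_0 are assumptions following common convention):

```python
import math

def vinet_pressure(volume, params):
    """Vinet P(V): with x = (V/V_0)**(1/3) and eta = 1.5 * (K'_0 - 1),
    P = 3 * K_0 * (1 - x) / x**2 * exp(eta * (1 - x)).  [Pa]"""
    x = (volume / params["V_0"]) ** (1.0 / 3.0)
    eta = 1.5 * (params["Kprime_0"] - 1.0)
    return 3.0 * params["K_0"] * (1.0 - x) / (x * x) * math.exp(eta * (1.0 - x))
```

At V = V_0 the pressure is exactly zero, and pressure grows monotonically under compression, as expected of any sensible isothermal EOS.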
This equation of state actually predates Vinet by 55 years, and was investigated further by .

volume(pressure, temperature, params)[source]
Returns volume $$[m^3]$$ as a function of pressure $$[Pa]$$.

pressure(temperature, volume, params)[source]
Parameters:
- volume : float. Molar volume at which to evaluate the equation of state. $$[m^3]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- pressure : float. Pressure of the mineral, including cold and thermal parts. $$[Pa]$$

isothermal_bulk_modulus(pressure, temperature, volume, params)[source]
Returns isothermal bulk modulus $$K_T$$ $$[Pa]$$ as a function of pressure $$[Pa]$$, temperature $$[K]$$ and volume $$[m^3]$$.

adiabatic_bulk_modulus(pressure, temperature, volume, params)[source]
Returns adiabatic bulk modulus $$K_s$$ of the mineral. $$[Pa]$$

shear_modulus(pressure, temperature, volume, params)[source]
Returns shear modulus $$G$$ of the mineral. $$[Pa]$$ Currently not included in the Vinet EOS, so omitted.

entropy(pressure, temperature, volume, params)[source]
Returns the molar entropy $$\mathcal{S}$$ of the mineral. $$[J/K/mol]$$

molar_internal_energy(pressure, temperature, volume, params)[source]
Returns the internal energy $$\mathcal{E}$$ of the mineral. $$[J/mol]$$

gibbs_free_energy(pressure, temperature, volume, params)[source]
Returns the Gibbs free energy $$\mathcal{G}$$ of the mineral. $$[J/mol]$$

molar_heat_capacity_v(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

molar_heat_capacity_p(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

thermal_expansivity(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return zero.
$$[1/K]$$

grueneisen_parameter(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return zero. $$[unitless]$$

validate_parameters(params)[source]
Check for existence and validity of the parameters.

density(volume, params)
Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters:
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- density : float. Density of the mineral. $$[kg/m^3]$$

enthalpy(pressure, temperature, volume, params)
Parameters:
- pressure : float. Pressure at which to evaluate the equation of state. $$[Pa]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- H : float. Enthalpy of the mineral.

helmholtz_free_energy(pressure, temperature, volume, params)
Parameters:
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- F : float. Helmholtz free energy of the mineral.

## Morse Potential¶

class burnman.eos.Morse[source]
Class for the isothermal Morse Potential equation of state detailed in . This equation of state has no temperature dependence.

volume(pressure, temperature, params)[source]
Returns volume $$[m^3]$$ as a function of pressure $$[Pa]$$.

pressure(temperature, volume, params)[source]
Parameters:
- volume : float. Molar volume at which to evaluate the equation of state. $$[m^3]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- pressure : float. Pressure of the mineral, including cold and thermal parts. $$[Pa]$$

isothermal_bulk_modulus(pressure, temperature, volume, params)[source]
Returns isothermal bulk modulus $$K_T$$ $$[Pa]$$ as a function of pressure $$[Pa]$$, temperature $$[K]$$ and volume $$[m^3]$$.

adiabatic_bulk_modulus(pressure, temperature, volume, params)[source]
Returns adiabatic bulk modulus $$K_s$$ of the mineral. $$[Pa]$$

shear_modulus(pressure, temperature, volume, params)[source]
Returns shear modulus $$G$$ of the mineral. $$[Pa]$$

entropy(pressure, temperature, volume, params)[source]
Returns the molar entropy $$\mathcal{S}$$ of the mineral. $$[J/K/mol]$$

molar_internal_energy(pressure, temperature, volume, params)[source]
Returns the internal energy $$\mathcal{E}$$ of the mineral. $$[J/mol]$$

gibbs_free_energy(pressure, temperature, volume, params)[source]
Returns the Gibbs free energy $$\mathcal{G}$$ of the mineral. $$[J/mol]$$

molar_heat_capacity_v(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

molar_heat_capacity_p(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

thermal_expansivity(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return zero. $$[1/K]$$

grueneisen_parameter(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return zero. $$[unitless]$$

validate_parameters(params)[source]
Check for existence and validity of the parameters.

density(volume, params)
Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters:
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume().
$$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- density : float. Density of the mineral. $$[kg/m^3]$$

enthalpy(pressure, temperature, volume, params)
Parameters:
- pressure : float. Pressure at which to evaluate the equation of state. $$[Pa]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- H : float. Enthalpy of the mineral.

helmholtz_free_energy(pressure, temperature, volume, params)
Parameters:
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- F : float. Helmholtz free energy of the mineral.

## Reciprocal K-prime¶

class burnman.eos.RKprime[source]
Class for the isothermal reciprocal K-prime equation of state detailed in [SD04]. This equation of state is a development of work by [Kea54] and [SD00], making use of the fact that $$K'$$ typically varies smoothly as a function of $$P/K$$, and is thermodynamically required to exceed 5/3 at infinite pressure.

It is worth noting that this equation of state rapidly becomes unstable at negative pressures, so should not be trusted to provide a good HT-LP equation of state using a thermal pressure formulation. The negative root of $$dP/dK$$ can be found at $$K/P = K'_{\infty} - K'_0$$, which corresponds to a bulk modulus of $$K = K_0 ( 1 - K'_{\infty}/K'_0 )^{K'_0/K'_{\infty}}$$ and a volume of $$V = V_0 ( K'_0 / (K'_0 - K'_{\infty}) )^{K'_0/{K'}^2_{\infty}} \exp{(-1/K'_{\infty})}$$.

This equation of state has no temperature dependence.

volume(pressure, temperature, params)[source]
Returns volume $$[m^3]$$ as a function of pressure $$[Pa]$$.
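The singular point quoted above is straightforward to evaluate numerically. The following sketch simply transcribes the two closed-form expressions from the paragraph above; it is not part of burnman, and the argument names are illustrative:

```python
import math

def rkprime_negative_root(K_0, V_0, Kprime_0, Kprime_inf):
    """Bulk modulus and volume at the negative root of dP/dK, transcribing
    K = K_0 * (1 - K'_inf/K'_0)**(K'_0/K'_inf)  and
    V = V_0 * (K'_0/(K'_0 - K'_inf))**(K'_0/K'_inf**2) * exp(-1/K'_inf)."""
    K = K_0 * (1.0 - Kprime_inf / Kprime_0) ** (Kprime_0 / Kprime_inf)
    V = V_0 * (Kprime_0 / (Kprime_0 - Kprime_inf)) ** (
        Kprime_0 / Kprime_inf ** 2) * math.exp(-1.0 / Kprime_inf)
    return K, V
```

Note that the expressions are only real-valued when $$K'_{\infty}$$ lies between 5/3 and $$K'_0$$, the thermodynamic bounds stated for validate_parameters() below.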
pressure(temperature, volume, params)[source]
Returns pressure $$[Pa]$$ as a function of volume $$[m^3]$$.

isothermal_bulk_modulus(pressure, temperature, volume, params)[source]
Returns isothermal bulk modulus $$K_T$$ $$[Pa]$$ as a function of pressure $$[Pa]$$, temperature $$[K]$$ and volume $$[m^3]$$.

adiabatic_bulk_modulus(pressure, temperature, volume, params)[source]
Returns adiabatic bulk modulus $$K_s$$ of the mineral. $$[Pa]$$

shear_modulus(pressure, temperature, volume, params)[source]
Returns shear modulus $$G$$ of the mineral. $$[Pa]$$

entropy(pressure, temperature, volume, params)[source]
Returns the molar entropy $$\mathcal{S}$$ of the mineral. $$[J/K/mol]$$

gibbs_free_energy(pressure, temperature, volume, params)[source]
Returns the Gibbs free energy $$\mathcal{G}$$ of the mineral. $$[J/mol]$$

molar_internal_energy(pressure, temperature, volume, params)[source]
Returns the internal energy $$\mathcal{E}$$ of the mineral. $$[J/mol]$$

molar_heat_capacity_v(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

molar_heat_capacity_p(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$

thermal_expansivity(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return zero. $$[1/K]$$

grueneisen_parameter(pressure, temperature, volume, params)[source]
Since this equation of state does not contain temperature effects, simply return zero. $$[unitless]$$

validate_parameters(params)[source]
Check for existence and validity of the parameters. The value for $$K'_{\infty}$$ is thermodynamically bounded between 5/3 and $$K'_0$$ [SD04].

density(volume, params)
Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters:
- volume : float. Molar volume of the mineral.
For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- density : float. Density of the mineral. $$[kg/m^3]$$

enthalpy(pressure, temperature, volume, params)
Parameters:
- pressure : float. Pressure at which to evaluate the equation of state. $$[Pa]$$
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- H : float. Enthalpy of the mineral.

helmholtz_free_energy(pressure, temperature, volume, params)
Parameters:
- temperature : float. Temperature at which to evaluate the equation of state. $$[K]$$
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- F : float. Helmholtz free energy of the mineral.

## Stixrude and Lithgow-Bertelloni Formulation¶

### Base class¶

class burnman.eos.slb.SLBBase[source]
Base class for the finite-strain Mie-Grueneisen-Debye equation of state detailed in [SLB05]. For the most part the equations are all third order in strain, but see further the burnman.slb.SLB2 and burnman.slb.SLB3 classes.

volume_dependent_q(x, params)[source]
Finite strain approximation for $$q$$, the isotropic volume strain derivative of the grueneisen parameter.

volume(pressure, temperature, params)[source]
Returns molar volume. $$[m^3]$$

pressure(temperature, volume, params)[source]
Returns the pressure of the mineral at a given temperature and volume. $$[Pa]$$

grueneisen_parameter(pressure, temperature, volume, params)[source]
Returns grueneisen parameter. $$[unitless]$$

isothermal_bulk_modulus(pressure, temperature, volume, params)[source]
Returns isothermal bulk modulus. $$[Pa]$$

adiabatic_bulk_modulus(pressure, temperature, volume, params)[source]
Returns adiabatic bulk modulus. $$[Pa]$$

shear_modulus(pressure, temperature, volume, params)[source]
Returns shear modulus.
$$[Pa]$$

molar_heat_capacity_v(pressure, temperature, volume, params)[source]
Returns heat capacity at constant volume. $$[J/K/mol]$$

molar_heat_capacity_p(pressure, temperature, volume, params)[source]
Returns heat capacity at constant pressure. $$[J/K/mol]$$

thermal_expansivity(pressure, temperature, volume, params)[source]
Returns thermal expansivity. $$[1/K]$$

gibbs_free_energy(pressure, temperature, volume, params)[source]
Returns the Gibbs free energy at the pressure and temperature of the mineral. $$[J/mol]$$

molar_internal_energy(pressure, temperature, volume, params)[source]
Returns the internal energy at the pressure and temperature of the mineral. $$[J/mol]$$

entropy(pressure, temperature, volume, params)[source]
Returns the entropy at the pressure and temperature of the mineral. $$[J/K/mol]$$

enthalpy(pressure, temperature, volume, params)[source]
Returns the enthalpy at the pressure and temperature of the mineral. $$[J/mol]$$

helmholtz_free_energy(pressure, temperature, volume, params)[source]
Returns the Helmholtz free energy at the pressure and temperature of the mineral. $$[J/mol]$$

validate_parameters(params)[source]
Check for existence and validity of the parameters.

density(volume, params)
Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters:
- volume : float. Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
- params : dictionary. Dictionary containing material parameters required by the equation of state.
Returns:
- density : float. Density of the mineral. $$[kg/m^3]$$

### SLB2¶

class burnman.eos.SLB2[source]
Bases: SLBBase
SLB equation of state with second-order finite strain expansion for the shear modulus. In general, this should not be used, but sometimes shear modulus data is fit to a second-order equation of state. In that case, you should use this. The moral is, be careful!

adiabatic_bulk_modulus(pressure, temperature, volume, params)
Returns adiabatic bulk modulus.
$$[Pa]$$
density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
density : float Density of the mineral. $$[kg/m^3]$$
enthalpy(pressure, temperature, volume, params) Returns the enthalpy at the pressure and temperature of the mineral [J/mol]
entropy(pressure, temperature, volume, params) Returns the entropy at the pressure and temperature of the mineral [J/K/mol]
gibbs_free_energy(pressure, temperature, volume, params) Returns the Gibbs free energy at the pressure and temperature of the mineral [J/mol]
grueneisen_parameter(pressure, temperature, volume, params) Returns grueneisen parameter $$[unitless]$$
helmholtz_free_energy(pressure, temperature, volume, params) Returns the Helmholtz free energy at the pressure and temperature of the mineral [J/mol]
isothermal_bulk_modulus(pressure, temperature, volume, params) Returns isothermal bulk modulus $$[Pa]$$
molar_heat_capacity_p(pressure, temperature, volume, params) Returns heat capacity at constant pressure. $$[J/K/mol]$$
molar_heat_capacity_v(pressure, temperature, volume, params) Returns heat capacity at constant volume. $$[J/K/mol]$$
molar_internal_energy(pressure, temperature, volume, params) Returns the internal energy at the pressure and temperature of the mineral [J/mol]
pressure(temperature, volume, params) Returns the pressure of the mineral at a given temperature and volume [Pa]
shear_modulus(pressure, temperature, volume, params) Returns shear modulus. $$[Pa]$$
thermal_expansivity(pressure, temperature, volume, params) Returns thermal expansivity. $$[1/K]$$
validate_parameters(params) Check for existence and validity of the parameters
volume(pressure, temperature, params) Returns molar volume.
$$[m^3]$$
volume_dependent_q(x, params) Finite strain approximation for $$q$$, the isotropic volume strain derivative of the grueneisen parameter.

### SLB3

class burnman.eos.SLB3[source]
Bases: SLBBase
SLB equation of state with third order finite strain expansion for the shear modulus (this should be preferred, as it is more thermodynamically consistent).
adiabatic_bulk_modulus(pressure, temperature, volume, params) Returns adiabatic bulk modulus. $$[Pa]$$
density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
density : float Density of the mineral. $$[kg/m^3]$$
enthalpy(pressure, temperature, volume, params) Returns the enthalpy at the pressure and temperature of the mineral [J/mol]
entropy(pressure, temperature, volume, params) Returns the entropy at the pressure and temperature of the mineral [J/K/mol]
gibbs_free_energy(pressure, temperature, volume, params) Returns the Gibbs free energy at the pressure and temperature of the mineral [J/mol]
grueneisen_parameter(pressure, temperature, volume, params) Returns grueneisen parameter $$[unitless]$$
helmholtz_free_energy(pressure, temperature, volume, params) Returns the Helmholtz free energy at the pressure and temperature of the mineral [J/mol]
isothermal_bulk_modulus(pressure, temperature, volume, params) Returns isothermal bulk modulus $$[Pa]$$
molar_heat_capacity_p(pressure, temperature, volume, params) Returns heat capacity at constant pressure. $$[J/K/mol]$$
molar_heat_capacity_v(pressure, temperature, volume, params) Returns heat capacity at constant volume.
$$[J/K/mol]$$
molar_internal_energy(pressure, temperature, volume, params) Returns the internal energy at the pressure and temperature of the mineral [J/mol]
pressure(temperature, volume, params) Returns the pressure of the mineral at a given temperature and volume [Pa]
shear_modulus(pressure, temperature, volume, params) Returns shear modulus. $$[Pa]$$
thermal_expansivity(pressure, temperature, volume, params) Returns thermal expansivity. $$[1/K]$$
validate_parameters(params) Check for existence and validity of the parameters
volume(pressure, temperature, params) Returns molar volume. $$[m^3]$$
volume_dependent_q(x, params) Finite strain approximation for $$q$$, the isotropic volume strain derivative of the grueneisen parameter.

## Mie-Grüneisen-Debye

### Base class

class burnman.eos.mie_grueneisen_debye.MGDBase[source]
Base class for a generic finite-strain Mie-Grueneisen-Debye equation of state. References for this can be found in many places, such as Shim, Duffy and Kenichi (2002) and Jackson and Rigden (1996). Here we mostly follow the appendices of Matas et al (2007). Of particular note is the thermal correction to the shear modulus, which was developed by Hama and Suito (1998).
grueneisen_parameter(pressure, temperature, volume, params)[source] Returns grueneisen parameter [unitless] as a function of pressure, temperature, and volume (EQ B6)
volume(pressure, temperature, params)[source] Returns volume [m^3] as a function of pressure [Pa] and temperature [K] EQ B7
isothermal_bulk_modulus(pressure, temperature, volume, params)[source] Returns isothermal bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3]. EQ B8
shear_modulus(pressure, temperature, volume, params)[source] Returns shear modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3].
EQ B11
molar_heat_capacity_v(pressure, temperature, volume, params)[source] Returns heat capacity at constant volume at the pressure, temperature, and volume [J/K/mol]
thermal_expansivity(pressure, temperature, volume, params)[source] Returns thermal expansivity at the pressure, temperature, and volume [1/K]
molar_heat_capacity_p(pressure, temperature, volume, params)[source] Returns heat capacity at constant pressure at the pressure, temperature, and volume [J/K/mol]
adiabatic_bulk_modulus(pressure, temperature, volume, params)[source] Returns adiabatic bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3]. EQ D6
pressure(temperature, volume, params)[source] Returns pressure [Pa] as a function of temperature [K] and volume [m^3] EQ B7
gibbs_free_energy(pressure, temperature, volume, params)[source] Returns the Gibbs free energy at the pressure and temperature of the mineral [J/mol]
molar_internal_energy(pressure, temperature, volume, params)[source] Returns the internal energy at the pressure and temperature of the mineral [J/mol]
entropy(pressure, temperature, volume, params)[source] Returns the entropy at the pressure and temperature of the mineral [J/K/mol]
enthalpy(pressure, temperature, volume, params)[source] Returns the enthalpy at the pressure and temperature of the mineral [J/mol]
helmholtz_free_energy(pressure, temperature, volume, params)[source] Returns the Helmholtz free energy at the pressure and temperature of the mineral [J/mol]
validate_parameters(params)[source] Check for existence and validity of the parameters
density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
density : float Density of the mineral.
$$[kg/m^3]$$

### MGD2

class burnman.eos.MGD2[source]
Bases: MGDBase
MGD equation of state with second order finite strain expansion for the shear modulus. In general, this should not be used, but sometimes shear modulus data is fit to a second order equation of state. In that case, you should use this. The moral is, be careful!
adiabatic_bulk_modulus(pressure, temperature, volume, params) Returns adiabatic bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3]. EQ D6
density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
density : float Density of the mineral. $$[kg/m^3]$$
enthalpy(pressure, temperature, volume, params) Returns the enthalpy at the pressure and temperature of the mineral [J/mol]
entropy(pressure, temperature, volume, params) Returns the entropy at the pressure and temperature of the mineral [J/K/mol]
gibbs_free_energy(pressure, temperature, volume, params) Returns the Gibbs free energy at the pressure and temperature of the mineral [J/mol]
grueneisen_parameter(pressure, temperature, volume, params) Returns grueneisen parameter [unitless] as a function of pressure, temperature, and volume (EQ B6)
helmholtz_free_energy(pressure, temperature, volume, params) Returns the Helmholtz free energy at the pressure and temperature of the mineral [J/mol]
isothermal_bulk_modulus(pressure, temperature, volume, params) Returns isothermal bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3].
EQ B8
molar_heat_capacity_p(pressure, temperature, volume, params) Returns heat capacity at constant pressure at the pressure, temperature, and volume [J/K/mol]
molar_heat_capacity_v(pressure, temperature, volume, params) Returns heat capacity at constant volume at the pressure, temperature, and volume [J/K/mol]
molar_internal_energy(pressure, temperature, volume, params) Returns the internal energy at the pressure and temperature of the mineral [J/mol]
pressure(temperature, volume, params) Returns pressure [Pa] as a function of temperature [K] and volume [m^3] EQ B7
shear_modulus(pressure, temperature, volume, params) Returns shear modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3]. EQ B11
thermal_expansivity(pressure, temperature, volume, params) Returns thermal expansivity at the pressure, temperature, and volume [1/K]
validate_parameters(params) Check for existence and validity of the parameters
volume(pressure, temperature, params) Returns volume [m^3] as a function of pressure [Pa] and temperature [K] EQ B7

### MGD3

class burnman.eos.MGD3[source]
Bases: MGDBase
MGD equation of state with third order finite strain expansion for the shear modulus (this should be preferred, as it is more thermodynamically consistent).
adiabatic_bulk_modulus(pressure, temperature, volume, params) Returns adiabatic bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3]. EQ D6
density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
density : float Density of the mineral.
$$[kg/m^3]$$
enthalpy(pressure, temperature, volume, params) Returns the enthalpy at the pressure and temperature of the mineral [J/mol]
entropy(pressure, temperature, volume, params) Returns the entropy at the pressure and temperature of the mineral [J/K/mol]
gibbs_free_energy(pressure, temperature, volume, params) Returns the Gibbs free energy at the pressure and temperature of the mineral [J/mol]
grueneisen_parameter(pressure, temperature, volume, params) Returns grueneisen parameter [unitless] as a function of pressure, temperature, and volume (EQ B6)
helmholtz_free_energy(pressure, temperature, volume, params) Returns the Helmholtz free energy at the pressure and temperature of the mineral [J/mol]
isothermal_bulk_modulus(pressure, temperature, volume, params) Returns isothermal bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3]. EQ B8
molar_heat_capacity_p(pressure, temperature, volume, params) Returns heat capacity at constant pressure at the pressure, temperature, and volume [J/K/mol]
molar_heat_capacity_v(pressure, temperature, volume, params) Returns heat capacity at constant volume at the pressure, temperature, and volume [J/K/mol]
molar_internal_energy(pressure, temperature, volume, params) Returns the internal energy at the pressure and temperature of the mineral [J/mol]
pressure(temperature, volume, params) Returns pressure [Pa] as a function of temperature [K] and volume [m^3] EQ B7
shear_modulus(pressure, temperature, volume, params) Returns shear modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3].
EQ B11
thermal_expansivity(pressure, temperature, volume, params) Returns thermal expansivity at the pressure, temperature, and volume [1/K]
validate_parameters(params) Check for existence and validity of the parameters
volume(pressure, temperature, params) Returns volume [m^3] as a function of pressure [Pa] and temperature [K] EQ B7

## Modified Tait

class burnman.eos.MT[source]
Base class for the generic modified Tait equation of state. References for this can be found in and (followed here). An instance “m” of a Mineral can be assigned this equation of state with the command m.set_method(‘mt’) (or by initialising the class with the param equation_of_state = ‘mt’).
volume(pressure, temperature, params)[source] Returns volume $$[m^3]$$ as a function of pressure $$[Pa]$$.
pressure(temperature, volume, params)[source] Returns pressure [Pa] as a function of temperature [K] and volume [m^3]
isothermal_bulk_modulus(pressure, temperature, volume, params)[source] Returns isothermal bulk modulus $$K_T$$ of the mineral. $$[Pa]$$.
adiabatic_bulk_modulus(pressure, temperature, volume, params)[source] Since this equation of state does not contain temperature effects, simply return a very large number. $$[Pa]$$
shear_modulus(pressure, temperature, volume, params)[source] Not implemented in the Modified Tait EoS. $$[Pa]$$ Returns 0. Could potentially apply a fixed Poisson's ratio as a rough estimate.
entropy(pressure, temperature, volume, params)[source] Returns the molar entropy $$\mathcal{S}$$ of the mineral. $$[J/K/mol]$$
molar_internal_energy(pressure, temperature, volume, params)[source] Returns the internal energy $$\mathcal{E}$$ of the mineral. $$[J/mol]$$
gibbs_free_energy(pressure, temperature, volume, params)[source] Returns the Gibbs free energy $$\mathcal{G}$$ of the mineral. $$[J/mol]$$
molar_heat_capacity_v(pressure, temperature, volume, params)[source] Since this equation of state does not contain temperature effects, simply return a very large number.
$$[J/K/mol]$$
molar_heat_capacity_p(pressure, temperature, volume, params)[source] Since this equation of state does not contain temperature effects, simply return a very large number. $$[J/K/mol]$$
thermal_expansivity(pressure, temperature, volume, params)[source] Since this equation of state does not contain temperature effects, simply return zero. $$[1/K]$$
grueneisen_parameter(pressure, temperature, volume, params)[source] Since this equation of state does not contain temperature effects, simply return zero. $$[unitless]$$
validate_parameters(params)[source] Check for existence and validity of the parameters
density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
density : float Density of the mineral. $$[kg/m^3]$$
enthalpy(pressure, temperature, volume, params)
Parameters
pressure : float Pressure at which to evaluate the equation of state. [Pa]
temperature : float Temperature at which to evaluate the equation of state. [K]
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
H : float Enthalpy of the mineral
helmholtz_free_energy(pressure, temperature, volume, params)
Parameters
temperature : float Temperature at which to evaluate the equation of state. [K]
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3]
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
F : float Helmholtz free energy of the mineral

## Holland and Powell Formulations

### HP_TMT (2011 solid formulation)

class burnman.eos.HP_TMT[source]
Base class for the thermal equation of state based on the generic modified Tait equation of state (class MT), as described in .
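The static (cold) part of these Holland and Powell classes is the modified Tait relation of the MT class above. As a rough illustration, the sketch below evaluates the Tait volume relation V/V0 = 1 - a(1 - (1 + bP)^(-c)), with the a, b, c substitutions as given by Holland and Powell (2011); the parameter values are hypothetical (not from any burnman mineral file), and the real volume() method of the thermal classes additionally applies a thermal-pressure correction.

```python
def tait_constants(K_0, Kprime_0, Kdprime_0):
    """a, b, c of the modified Tait equation (Holland & Powell, 2011)."""
    a = (1.0 + Kprime_0) / (1.0 + Kprime_0 + K_0 * Kdprime_0)
    b = Kprime_0 / K_0 - Kdprime_0 / (1.0 + Kprime_0)
    c = (1.0 + Kprime_0 + K_0 * Kdprime_0) / (
        Kprime_0**2 + Kprime_0 - K_0 * Kdprime_0)
    return a, b, c


def mt_volume(pressure, V_0, K_0, Kprime_0, Kdprime_0):
    """Molar volume [m^3] from V/V_0 = 1 - a*(1 - (1 + b*P)**(-c))."""
    a, b, c = tait_constants(K_0, Kprime_0, Kdprime_0)
    return V_0 * (1.0 - a * (1.0 - (1.0 + b * pressure) ** (-c)))


# Hypothetical parameters: V_0 = 2.5e-5 m^3, K_0 = 160 GPa, K' = 4,
# and the common default K'' = -K'/K_0 when K'' has not been fitted.
V = mt_volume(1.0e10, 2.5e-5, 1.6e11, 4.0, -4.0 / 1.6e11)
```

At zero pressure the bracket collapses to 1 and the relation returns V_0 exactly; at positive pressure the volume decreases monotonically, as expected of a compression curve.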
An instance “m” of a Mineral can be assigned this equation of state with the command m.set_method(‘hp_tmt’) (or by initialising the class with the param equation_of_state = ‘hp_tmt’).
volume(pressure, temperature, params)[source] Returns volume [m^3] as a function of pressure [Pa] and temperature [K] EQ 12
pressure(temperature, volume, params)[source] Returns pressure [Pa] as a function of temperature [K] and volume [m^3] EQ B7
grueneisen_parameter(pressure, temperature, volume, params)[source] Returns grueneisen parameter [unitless] as a function of pressure, temperature, and volume.
isothermal_bulk_modulus(pressure, temperature, volume, params)[source] Returns isothermal bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3]. EQ 13+2
shear_modulus(pressure, temperature, volume, params)[source] Not implemented. Returns 0. Could potentially apply a fixed Poisson's ratio as a rough estimate.
molar_heat_capacity_v(pressure, temperature, volume, params)[source] Returns heat capacity at constant volume at the pressure, temperature, and volume [J/K/mol].
thermal_expansivity(pressure, temperature, volume, params)[source] Returns thermal expansivity at the pressure, temperature, and volume [1/K]. This function replaces -Pth in EQ 13+1 with P-Pth for non-ambient temperature
molar_heat_capacity_p0(temperature, params)[source] Returns heat capacity at ambient pressure as a function of temperature [J/K/mol]. Cp = a + bT + cT^-2 + dT^-0.5 in .
molar_heat_capacity_p_einstein(pressure, temperature, volume, params)[source] Returns heat capacity at constant pressure at the pressure, temperature, and volume, using the C_v and Einstein model [J/K/mol] WARNING: Only for comparison with internally self-consistent C_p
adiabatic_bulk_modulus(pressure, temperature, volume, params)[source] Returns adiabatic bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3].
gibbs_free_energy(pressure, temperature, volume, params)[source] Returns the Gibbs free energy [J/mol] as a function of pressure [Pa] and temperature [K].
helmholtz_free_energy(pressure, temperature, volume, params)[source]
Parameters
temperature : float Temperature at which to evaluate the equation of state. [K]
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3]
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
F : float Helmholtz free energy of the mineral
entropy(pressure, temperature, volume, params)[source] Returns the entropy [J/K/mol] as a function of pressure [Pa] and temperature [K].
enthalpy(pressure, temperature, volume, params)[source] Returns the enthalpy [J/mol] as a function of pressure [Pa] and temperature [K].
molar_heat_capacity_p(pressure, temperature, volume, params)[source] Returns the heat capacity [J/K/mol] as a function of pressure [Pa] and temperature [K].
validate_parameters(params)[source] Check for existence and validity of the parameters
density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
density : float Density of the mineral. $$[kg/m^3]$$
molar_internal_energy(pressure, temperature, volume, params)
Parameters
pressure : float Pressure at which to evaluate the equation of state. [Pa]
temperature : float Temperature at which to evaluate the equation of state. [K]
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3]
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
U : float Internal energy of the mineral

### HP_TMTL (2011 liquid formulation)

class burnman.eos.HP_TMTL[source]
Base class for the thermal equation of state described in , but with the Modified Tait as the static part, as described in . An instance “m” of a Mineral can be assigned this equation of state with the command m.set_method(‘hp_tmtL’) (or by initialising the class with the param equation_of_state = ‘hp_tmtL’).
volume(pressure, temperature, params)[source] Returns volume [m^3] as a function of pressure [Pa] and temperature [K]
pressure(temperature, volume, params)[source] Returns pressure [Pa] as a function of temperature [K] and volume [m^3]
grueneisen_parameter(pressure, temperature, volume, params)[source] Returns grueneisen parameter [unitless] as a function of pressure, temperature, and volume.
isothermal_bulk_modulus(pressure, temperature, volume, params)[source] Returns isothermal bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3].
shear_modulus(pressure, temperature, volume, params)[source] Not implemented. Returns 0. Could potentially apply a fixed Poisson's ratio as a rough estimate.
molar_heat_capacity_v(pressure, temperature, volume, params)[source] Returns heat capacity at constant volume at the pressure, temperature, and volume [J/K/mol].
thermal_expansivity(pressure, temperature, volume, params)[source] Returns thermal expansivity at the pressure, temperature, and volume [1/K]
molar_heat_capacity_p0(temperature, params)[source] Returns heat capacity at ambient pressure as a function of temperature [J/K/mol] Cp = a + bT + cT^-2 + dT^-0.5 in .
adiabatic_bulk_modulus(pressure, temperature, volume, params)[source] Returns adiabatic bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3].
gibbs_free_energy(pressure, temperature, volume, params)[source] Returns the Gibbs free energy [J/mol] as a function of pressure [Pa] and temperature [K].
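The molar_heat_capacity_p0 entries in these Holland and Powell listings spell out the ambient-pressure heat-capacity polynomial, Cp = a + bT + cT^-2 + dT^-0.5. A minimal sketch of that evaluation (the coefficient values below are hypothetical; in burnman they come from the material's params dictionary):

```python
def molar_heat_capacity_p0(temperature, cp_coefficients):
    """Ambient-pressure heat capacity [J/K/mol]: Cp = a + b*T + c*T**-2 + d*T**-0.5."""
    a, b, c, d = cp_coefficients
    return a + b * temperature + c * temperature**-2 + d * temperature**-0.5


# Hypothetical coefficients, for illustration only:
cp = molar_heat_capacity_p0(400.0, (100.0, 0.01, -1.0e6, -500.0))
# 100 + 0.01*400 - 1.0e6/400**2 - 500/400**0.5 = 100 + 4 - 6.25 - 25 = 72.75
```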
helmholtz_free_energy(pressure, temperature, volume, params)[source]
Parameters
temperature : float Temperature at which to evaluate the equation of state. [K]
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3]
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
F : float Helmholtz free energy of the mineral
entropy(pressure, temperature, volume, params)[source] Returns the entropy [J/K/mol] as a function of pressure [Pa] and temperature [K].
enthalpy(pressure, temperature, volume, params)[source] Returns the enthalpy [J/mol] as a function of pressure [Pa] and temperature [K].
molar_heat_capacity_p(pressure, temperature, volume, params)[source] Returns the heat capacity [J/K/mol] as a function of pressure [Pa] and temperature [K].
validate_parameters(params)[source] Check for existence and validity of the parameters
density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
density : float Density of the mineral. $$[kg/m^3]$$
molar_internal_energy(pressure, temperature, volume, params)
Parameters
pressure : float Pressure at which to evaluate the equation of state. [Pa]
temperature : float Temperature at which to evaluate the equation of state. [K]
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3]
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
U : float Internal energy of the mineral

### HP98 (1998 formulation)

class burnman.eos.HP98[source]
Base class for the thermal equation of state described in .
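Every class in these listings inherits the same density() helper, which, as the repeated docstring says, is nothing more than molar mass divided by molar volume. A sketch of that one-liner (the numeric values below are hypothetical):

```python
def density(volume, params):
    """Density [kg/m^3] from molar volume [m^3]; params must include
    a "molar_mass" field in kg/mol, as in the density() docs above."""
    return params["molar_mass"] / volume


# Hypothetical values: 0.1 kg/mol at a molar volume of 2.5e-5 m^3
rho = density(2.5e-5, {"molar_mass": 0.1})  # -> 4000.0 kg/m^3
```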
An instance “m” of a Mineral can be assigned this equation of state with the command m.set_method(‘hp98’) (or by initialising the class with the param equation_of_state = ‘hp98’).
volume(pressure, temperature, params)[source] Returns volume [m^3] as a function of pressure [Pa] and temperature [K]
pressure(temperature, volume, params)[source] Returns pressure [Pa] as a function of temperature [K] and volume [m^3]
grueneisen_parameter(pressure, temperature, volume, params)[source] Returns grueneisen parameter [unitless] as a function of pressure, temperature, and volume.
isothermal_bulk_modulus(pressure, temperature, volume, params)[source] Returns isothermal bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3].
shear_modulus(pressure, temperature, volume, params)[source] Not implemented. Returns 0. Could potentially apply a fixed Poisson's ratio as a rough estimate.
molar_heat_capacity_v(pressure, temperature, volume, params)[source] Returns heat capacity at constant volume at the pressure, temperature, and volume [J/K/mol].
thermal_expansivity(pressure, temperature, volume, params)[source] Returns thermal expansivity at the pressure, temperature, and volume [1/K]
molar_heat_capacity_p0(temperature, params)[source] Returns heat capacity at ambient pressure as a function of temperature [J/K/mol] Cp = a + bT + cT^-2 + dT^-0.5 in .
adiabatic_bulk_modulus(pressure, temperature, volume, params)[source] Returns adiabatic bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3].
gibbs_free_energy(pressure, temperature, volume, params)[source] Returns the Gibbs free energy [J/mol] as a function of pressure [Pa] and temperature [K].
helmholtz_free_energy(pressure, temperature, volume, params)[source]
Parameters
temperature : float Temperature at which to evaluate the equation of state. [K]
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3]
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
F : float Helmholtz free energy of the mineral
entropy(pressure, temperature, volume, params)[source] Returns the entropy [J/K/mol] as a function of pressure [Pa] and temperature [K].
enthalpy(pressure, temperature, volume, params)[source] Returns the enthalpy [J/mol] as a function of pressure [Pa] and temperature [K].
molar_heat_capacity_p(pressure, temperature, volume, params)[source] Returns the heat capacity [J/K/mol] as a function of pressure [Pa] and temperature [K].
validate_parameters(params)[source] Check for existence and validity of the parameters
density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
density : float Density of the mineral. $$[kg/m^3]$$
molar_internal_energy(pressure, temperature, volume, params)
Parameters
pressure : float Pressure at which to evaluate the equation of state. [Pa]
temperature : float Temperature at which to evaluate the equation of state. [K]
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3]
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
U : float Internal energy of the mineral

## De Koker Solid and Liquid Formulations

### DKS_S (Solid formulation)

class burnman.eos.DKS_S[source]
Base class for the finite strain solid equation of state detailed in (supplementary materials).
volume_dependent_q(x, params)[source] Finite strain approximation for $$q$$, the isotropic volume strain derivative of the grueneisen parameter.
volume(pressure, temperature, params)[source] Returns molar volume.
$$[m^3]$$
pressure(temperature, volume, params)[source] Returns the pressure of the mineral at a given temperature and volume [Pa]
grueneisen_parameter(pressure, temperature, volume, params)[source] Returns grueneisen parameter $$[unitless]$$
isothermal_bulk_modulus(pressure, temperature, volume, params)[source] Returns isothermal bulk modulus $$[Pa]$$
adiabatic_bulk_modulus(pressure, temperature, volume, params)[source] Returns adiabatic bulk modulus. $$[Pa]$$
shear_modulus(pressure, temperature, volume, params)[source] Returns shear modulus. $$[Pa]$$
molar_heat_capacity_v(pressure, temperature, volume, params)[source] Returns heat capacity at constant volume. $$[J/K/mol]$$
molar_heat_capacity_p(pressure, temperature, volume, params)[source] Returns heat capacity at constant pressure. $$[J/K/mol]$$
thermal_expansivity(pressure, temperature, volume, params)[source] Returns thermal expansivity. $$[1/K]$$
gibbs_free_energy(pressure, temperature, volume, params)[source] Returns the Gibbs free energy at the pressure and temperature of the mineral [J/mol]
molar_internal_energy(pressure, temperature, volume, params)[source] Returns the internal energy at the pressure and temperature of the mineral [J/mol]
entropy(pressure, temperature, volume, params)[source] Returns the entropy at the pressure and temperature of the mineral [J/K/mol]
enthalpy(pressure, temperature, volume, params)[source] Returns the enthalpy at the pressure and temperature of the mineral [J/mol]
helmholtz_free_energy(pressure, temperature, volume, params)[source] Returns the Helmholtz free energy at the pressure and temperature of the mineral [J/mol]
validate_parameters(params)[source] Check for existence and validity of the parameters
density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters
volume : float Molar volume of the mineral. For consistency this should be calculated using volume().
$$[m^3]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
density : float Density of the mineral. $$[kg/m^3]$$

### DKS_L (Liquid formulation)

class burnman.eos.DKS_L[source]
Base class for the finite strain liquid equation of state detailed in (supplementary materials).
pressure(temperature, volume, params)[source]
Parameters
volume : float Molar volume at which to evaluate the equation of state. [m^3]
temperature : float Temperature at which to evaluate the equation of state. [K]
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
pressure : float Pressure of the mineral, including cold and thermal parts. [Pa]
volume(pressure, temperature, params)[source]
Parameters
pressure : float Pressure at which to evaluate the equation of state. $$[Pa]$$
temperature : float Temperature at which to evaluate the equation of state. $$[K]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
volume : float Molar volume of the mineral. $$[m^3]$$
isothermal_bulk_modulus(pressure, temperature, volume, params)[source] Returns isothermal bulk modulus $$[Pa]$$
adiabatic_bulk_modulus(pressure, temperature, volume, params)[source] Returns adiabatic bulk modulus. $$[Pa]$$
grueneisen_parameter(pressure, temperature, volume, params)[source] Returns grueneisen parameter. $$[unitless]$$
shear_modulus(pressure, temperature, volume, params)[source] Returns shear modulus. $$[Pa]$$ Zero for fluids
molar_heat_capacity_v(pressure, temperature, volume, params)[source] Returns heat capacity at constant volume. $$[J/K/mol]$$
molar_heat_capacity_p(pressure, temperature, volume, params)[source] Returns heat capacity at constant pressure. $$[J/K/mol]$$
thermal_expansivity(pressure, temperature, volume, params)[source] Returns thermal expansivity.
$$[1/K]$$
gibbs_free_energy(pressure, temperature, volume, params)[source] Returns the Gibbs free energy at the pressure and temperature of the mineral [J/mol]
entropy(pressure, temperature, volume, params)[source] Returns the entropy at the pressure and temperature of the mineral [J/K/mol]
enthalpy(pressure, temperature, volume, params)[source] Returns the enthalpy at the pressure and temperature of the mineral [J/mol]
helmholtz_free_energy(pressure, temperature, volume, params)[source] Returns the Helmholtz free energy at the pressure and temperature of the mineral [J/mol]
molar_internal_energy(pressure, temperature, volume, params)[source]
Parameters
pressure : float Pressure at which to evaluate the equation of state. [Pa]
temperature : float Temperature at which to evaluate the equation of state. [K]
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3]
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
U : float Internal energy of the mineral
validate_parameters(params)[source] Check for existence and validity of the parameters
density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field.
Parameters
volume : float Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$
params : dictionary Dictionary containing material parameters required by the equation of state.
Returns
density : float Density of the mineral. $$[kg/m^3]$$

## Anderson and Ahrens (1994)

class burnman.eos.AA[source]
Class for the $$E$$-$$V$$-$$S$$ liquid metal EOS detailed in . Internal energy ($$E$$) is first calculated along a reference isentrope using a fourth order BM EoS ($$V_0$$, $$KS$$, $$KS'$$, $$KS''$$), which gives volume as a function of pressure, coupled with the thermodynamic identity: $$-\partial E/ \partial V |_S = P$$.
The temperature along the isentrope is calculated via $$\partial (\ln T)/\partial (\ln \rho) |_S = \gamma$$ which gives: $$T_S/T_0 = \exp(\int( \gamma/\rho ) d \rho)$$ The thermal effect on internal energy is calculated at constant volume using expressions for the kinetic, electronic and potential contributions to the volumetric heat capacity, which can then be integrated with respect to temperature: $$\partial E/\partial T |_V = C_V$$ $$\partial E/\partial S |_V = T$$ We note that Anderson and Ahrens (1994) also include a detailed description of the Grueneisen parameter as a function of volume and energy (Equation 15), and use this to determine the temperature along the principal isentrope (Equations B1-B10) and the thermal pressure away from that isentrope (Equation 23). However, this expression is inconsistent with the equation of state away from the principal isentrope. Here we choose to calculate the thermal pressure and Grueneisen parameter thus: 1) As energy and entropy are defined by the equation of state at any temperature and volume, pressure can be found via the expression: $$-\partial E/\partial V |_S = P$$ 2) The Grueneisen parameter can now be determined as $$\gamma = V \partial P/\partial E |_V$$ To reiterate: away from the reference isentrope, the Grueneisen parameter calculated using these expressions is not equal to the (thermodynamically inconsistent) analytical expression given by Anderson and Ahrens (1994). A final note: the expression for $$\Lambda$$ (Equation 17) does not reproduce Figure 5. We assume here that the figure matches the model actually used by Anderson and Ahrens (1994), which has the form: $$F(-325.23 + 302.07 (\rho/\rho_0) + 30.45 (\rho/\rho_0)^{0.4})$$. volume_dependent_q(x, params)[source] Finite strain approximation for $$q$$, the isotropic volume strain derivative of the grueneisen parameter. volume(pressure, temperature, params)[source] Returns molar volume.
$$[m^3]$$ pressure(temperature, volume, params)[source] Returns the pressure of the mineral at a given temperature and volume [Pa] grueneisen_parameter(pressure, temperature, volume, params)[source] Returns grueneisen parameter $$[unitless]$$ isothermal_bulk_modulus(pressure, temperature, volume, params)[source] Returns isothermal bulk modulus $$[Pa]$$ Returns adiabatic bulk modulus. $$[Pa]$$ shear_modulus(pressure, temperature, volume, params)[source] Returns shear modulus. $$[Pa]$$ Zero for a liquid molar_heat_capacity_v(pressure, temperature, volume, params)[source] Returns heat capacity at constant volume. $$[J/K/mol]$$ molar_heat_capacity_p(pressure, temperature, volume, params)[source] Returns heat capacity at constant pressure. $$[J/K/mol]$$ thermal_expansivity(pressure, temperature, volume, params)[source] Returns thermal expansivity. $$[1/K]$$ Currently found by numerical differentiation (1/V * dV/dT) gibbs_free_energy(pressure, temperature, volume, params)[source] Returns the Gibbs free energy at the pressure and temperature of the mineral [J/mol] E + PV molar_internal_energy(pressure, temperature, volume, params)[source] Returns the internal energy at the pressure and temperature of the mineral [J/mol] entropy(pressure, temperature, volume, params)[source] Returns the entropy at the pressure and temperature of the mineral [J/K/mol] enthalpy(pressure, temperature, volume, params)[source] Returns the enthalpy at the pressure and temperature of the mineral [J/mol] E + PV helmholtz_free_energy(pressure, temperature, volume, params)[source] Returns the Helmholtz free energy at the pressure and temperature of the mineral [J/mol] E - TS validate_parameters(params)[source] Check for existence and validity of the parameters density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field. Parameters volumefloat Molar volume of the mineral. For consistency this should be calculated usingfunc:volume. 
$$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns densityfloat Density of the mineral. $$[kg/m^3]$$ ## CoRK¶ class burnman.eos.CORK[source] Class for the CoRK equation of state detailed in [HP91]. The CoRK EoS is a simple virial-type extension to the modified Redlich-Kwong (MRK) equation of state. It was designed to compensate for the tendency of the MRK equation of state to overestimate volumes at high pressures and accommodate the volume behaviour of coexisting gas and liquid phases along the saturation curve. grueneisen_parameter(pressure, temperature, volume, params)[source] Returns grueneisen parameter [unitless] as a function of pressure, temperature, and volume. volume(pressure, temperature, params)[source] Returns volume [m^3] as a function of pressure [Pa] and temperature [K] Eq. 7 in Holland and Powell, 1991 isothermal_bulk_modulus(pressure, temperature, volume, params)[source] Returns isothermal bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3]. EQ 13+2 shear_modulus(pressure, temperature, volume, params)[source] Not implemented. Returns 0. Could potentially apply a fixed Poissons ratio as a rough estimate. molar_heat_capacity_v(pressure, temperature, volume, params)[source] Returns heat capacity at constant volume at the pressure, temperature, and volume [J/K/mol]. 
thermal_expansivity(pressure, temperature, volume, params)[source] Returns thermal expansivity at the pressure, temperature, and volume [1/K] Replace -Pth in EQ 13+1 with P-Pth for non-ambient temperature molar_heat_capacity_p0(temperature, params)[source] Returns heat capacity at ambient pressure as a function of temperature [J/K/mol] Cp = a + bT + cT^-2 + dT^-0.5 in Holland and Powell, 2011 molar_heat_capacity_p(pressure, temperature, volume, params)[source] Returns heat capacity at constant pressure at the pressure, temperature, and volume [J/K/mol] Returns adiabatic bulk modulus [Pa] as a function of pressure [Pa], temperature [K], and volume [m^3]. gibbs_free_energy(pressure, temperature, volume, params)[source] Returns the gibbs free energy [J/mol] as a function of pressure [Pa] and temperature [K]. pressure(temperature, volume, params)[source] Returns pressure [Pa] as a function of temperature [K] and volume[m^3] validate_parameters(params)[source] Check for existence and validity of the parameters density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field. Parameters volumefloat Molar volume of the mineral. For consistency this should be calculated usingfunc:volume. $$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns densityfloat Density of the mineral. $$[kg/m^3]$$ enthalpy(pressure, temperature, volume, params) Parameters pressurefloat Pressure at which to evaluate the equation of state. [Pa] temperaturefloat Temperature at which to evaluate the equation of state. [K] paramsdictionary Dictionary containing material parameters required by the equation of state. 
Returns Hfloat Enthalpy of the mineral entropy(pressure, temperature, volume, params) Returns the entropy at the pressure and temperature of the mineral [J/K/mol] helmholtz_free_energy(pressure, temperature, volume, params) Parameters temperaturefloat Temperature at which to evaluate the equation of state. [K] volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3] paramsdictionary Dictionary containing material parameters required by the equation of state. Returns Ffloat Helmholtz free energy of the mineral molar_internal_energy(pressure, temperature, volume, params) Parameters pressurefloat Pressure at which to evaluate the equation of state. [Pa] temperaturefloat Temperature at which to evaluate the equation of state. [K] volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3] paramsdictionary Dictionary containing material parameters required by the equation of state. Returns Ufloat Internal energy of the mineral ## Brosh et al. (2007)¶ Class for the high pressure CALPHAD equation of state by [BMS07]. volume(pressure, temperature, params)[source] Returns volume $$[m^3]$$ as a function of pressure $$[Pa]$$. pressure(temperature, volume, params)[source] Parameters volumefloat Molar volume at which to evaluate the equation of state. [m^3] temperaturefloat Temperature at which to evaluate the equation of state. [K] paramsdictionary Dictionary containing material parameters required by the equation of state. Returns pressurefloat Pressure of the mineral, including cold and thermal parts. [Pa] isothermal_bulk_modulus(pressure, temperature, volume, params)[source] Returns the isothermal bulk modulus $$K_T$$ $$[Pa]$$ as a function of pressure $$[Pa]$$, temperature $$[K]$$ and volume $$[m^3]$$. Returns the adiabatic bulk modulus of the mineral. $$[Pa]$$. shear_modulus(pressure, temperature, volume, params)[source] Returns the shear modulus $$G$$ of the mineral.
$$[Pa]$$ molar_internal_energy(pressure, temperature, volume, params)[source] Returns the internal energy of the mineral. $$[J/mol]$$ gibbs_free_energy(pressure, temperature, volume, params)[source] Returns the Gibbs free energy of the mineral. $$[J/mol]$$ entropy(pressure, temperature, volume, params)[source] Returns the molar entropy of the mineral. $$[J/K/mol]$$ molar_heat_capacity_p(pressure, temperature, volume, params)[source] Returns the molar isobaric heat capacity $$[J/K/mol]$$. For now, this is calculated by numerical differentiation. thermal_expansivity(pressure, temperature, volume, params)[source] Returns the volumetric thermal expansivity $$[1/K]$$. For now, this is calculated by numerical differentiation. grueneisen_parameter(pressure, temperature, volume, params)[source] Returns the grueneisen parameter. This is a dependent thermodynamic variable in this equation of state. calculate_transformed_parameters(params)[source] This function calculates the “c” parameters of the [BMS07] equation of state. validate_parameters(params)[source] Check for existence and validity of the parameters density(volume, params) Calculate the density of the mineral $$[kg/m^3]$$. The params object must include a “molar_mass” field. Parameters volumefloat Molar volume of the mineral. For consistency this should be calculated usingfunc:volume. $$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns densityfloat Density of the mineral. $$[kg/m^3]$$ enthalpy(pressure, temperature, volume, params) Parameters pressurefloat Pressure at which to evaluate the equation of state. [Pa] temperaturefloat Temperature at which to evaluate the equation of state. [K] paramsdictionary Dictionary containing material parameters required by the equation of state. 
Returns Hfloat Enthalpy of the mineral helmholtz_free_energy(pressure, temperature, volume, params) Parameters temperaturefloat Temperature at which to evaluate the equation of state. [K] volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). [m^3] paramsdictionary Dictionary containing material parameters required by the equation of state. Returns Ffloat Helmholtz free energy of the mineral molar_heat_capacity_v(pressure, temperature, volume, params) Parameters pressurefloat Pressure at which to evaluate the equation of state. $$[Pa]$$ temperaturefloat Temperature at which to evaluate the equation of state. $$[K]$$ volumefloat Molar volume of the mineral. For consistency this should be calculated using volume(). $$[m^3]$$ paramsdictionary Dictionary containing material parameters required by the equation of state. Returns C_Vfloat Heat capacity at constant volume of the mineral. $$[J/K/mol]$$
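All of the EOS classes above expose the same `density(volume, params)` helper, which is simply molar mass divided by molar volume. The sketch below is a minimal standalone re-implementation for illustration (not the BurnMan source itself); the MgO-like numbers in the example are my own choice:

```python
def density(volume, params):
    """Density [kg/m^3] from molar volume [m^3/mol] and molar mass [kg/mol].

    Mirrors the documented EOS helper: the params dictionary must
    include a "molar_mass" field.
    """
    if "molar_mass" not in params:
        raise KeyError('params must include a "molar_mass" field')
    return params["molar_mass"] / volume

# Example with MgO-like values: 0.0403 kg/mol at V = 1.124e-5 m^3/mol
rho = density(1.124e-5, {"molar_mass": 0.0403})
print(rho)  # roughly 3.59e3 kg/m^3
```

Keeping the units in SI throughout (m^3/mol and kg/mol, as in the docstrings above) is what makes the result come out directly in kg/m^3.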
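The isentrope temperature relation quoted in the Anderson and Ahrens section above, $T_S/T_0 = \exp(\int (\gamma/\rho)\, d\rho)$, is easy to sanity-check numerically. The sketch below (my own illustration, not BurnMan code) holds $\gamma$ constant, in which case the integral has the closed form $(\rho/\rho_0)^{\gamma}$, and compares that against a trapezoid-rule evaluation:

```python
import math

def isentrope_temperature_ratio(rho0, rho, gamma, n=10_000):
    """Evaluate T_S/T_0 = exp( integral of gamma/rho d(rho) ) from rho0
    to rho with the trapezoid rule; gamma is held constant here."""
    drho = (rho - rho0) / n
    total = 0.0
    for i in range(n):
        r_left = rho0 + i * drho
        r_right = r_left + drho
        total += 0.5 * (gamma / r_left + gamma / r_right) * drho
    return math.exp(total)

# For constant gamma the integral is gamma*ln(rho/rho0),
# so T_S/T_0 = (rho/rho0)**gamma.
ratio = isentrope_temperature_ratio(1.0, 1.3, 1.5)
print(ratio)  # close to 1.3**1.5 (about 1.482)
```

In the real equation of state $\gamma$ varies with volume (that is the point of `volume_dependent_q`), so the integral must be done numerically exactly as above, just with a non-constant integrand.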
Question: If the gauge pressure inside a rubber balloon with a 10.0-cm radius is 1.50 cm of water, what is the effective surface tension of the balloon? Answer: $3.68 \textrm{ N/m}$
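The quoted answer follows from Laplace's law for a thin spherical membrane with two surfaces, $\Delta P = 4\gamma/r$, after converting the gauge pressure from centimetres of water via $\Delta P = \rho g h$. A quick check of the arithmetic (my own working, using $g = 9.80\ \mathrm{m/s^2}$):

```python
rho_water = 1000.0   # kg/m^3
g = 9.80             # m/s^2
h = 0.0150           # 1.50 cm of water, in metres
r = 0.100            # 10.0 cm radius, in metres

delta_p = rho_water * g * h    # gauge pressure in Pa (147 Pa)
gamma = delta_p * r / 4.0      # Laplace's law, two surfaces: dP = 4*gamma/r
print(gamma)  # roughly 3.675 -> 3.68 N/m to three significant figures
```

The factor of 4 (rather than 2, as for a droplet with a single surface) is what treats the balloon as a bubble-like membrane with an inner and an outer surface.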
# Question on division field of abelian variety I am wondering if the following holds or not. Let A be an abelian variety of dimension $d\geq 1$ over $\mathbb{Q}$. Then there is a positive number c depending on d and A such that $[\mathbb{Q}(A[n]):\mathbb{Q}]\geq c^{w(n)} n^2$ where $w(n)$ is the number of distinct prime factors of n. I know that it must hold for d=1(elliptic curve case). I guess that we can even have stronger inequality like $[\mathbb{Q}(A[n]):\mathbb{Q}]\geq c^{w(n)} |GSp_{2d}(Z/nZ)|$ ,but don't know how to prove it, and couldn't find any reference other than d=1 case. Thanks, Sungjin Kim. - Perhaps you should change the $n^2$ to $n^{2d}$. – S. Carnahan Feb 3 '12 at 3:05 Conjecturally, for a large enough prime $\ell$, the image of the Galois representation on the points of $\ell$-division is the $\mathbf Z/\ell$-points of the Mumford-Tate group. So such an inequality would not hold unless the M-T group is $\mathop{\rm GSp}_{2d}$. Already for elliptic curves with complex multiplication, the lower bound is $\ell^2$, and not $\ell^3\approx |\mathop{\rm SL}_2(\mathbf Z/\ell)|$. – ACL Feb 3 '12 at 7:25 According to a result of Serre (see "R\'esum\'e des cours de 1985-1986". Coll`ege de France (1986)), for any $\epsilon>0$, there is a constant $C=C(A,\epsilon)$, such that $[{\bf Q}(P):{\bf Q}]\geq C\cdot n^{1-\epsilon}$ if $P\in A[n](\bar{\bf Q})$. If $A_{\bar{\bf Q}}$ does not contain any subvariety of CM type then one even has $[{\bf Q}(P):{\bf Q}]\geq C\cdot n^{2-\epsilon}$. Since $[{\bf Q}(A[n]):{\bf Q}]\geq [{\bf Q}(P):{\bf Q}]$, this applies to your situation. For further references and results, see A. Silverberg's article "Torsion points on abelian var. of CM type", Compositio 68. – Damian Rössler Feb 9 '12 at 12:02 @Damian Rossler Thank you for the answer. However, $[\mathbb{Q}(A[n]):\mathbb{Q}]\geq C\cdot n^{1-\epsilon}$ is not enough for my purpose. I wonder if there is a result with the exponent greater than 1. 
That is why I specifically put $n^2$ there. For $1-\epsilon$ result, it follows by $\mathbb{Q}(A[n])\supseteq \mathbb{Q}(\zeta_n)$. – i707107 Feb 10 '12 at 0:14 If $A_{\bar{\bf Q}}$ does not contain any subvariety of CM type then $2-\epsilon$ works (see my last comment). Is even that not enough ? Do you have to deal with abelian varieties of CM type ? (in that case I don't know what to suggest). – Damian Rössler Feb 10 '12 at 9:26
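As a small concrete check of the group order appearing in the conjectured stronger bound: for $d = 1$ the group $GSp_2$ is just $GL_2$ (every invertible $2\times 2$ matrix scales the symplectic form by its determinant), and $|GL_2(\mathbb{Z}/\ell)| = (\ell^2-1)(\ell^2-\ell)$ for a prime $\ell$. A brute-force count (illustrative only) confirms this for small primes:

```python
from itertools import product

def count_gl2(ell):
    """Count invertible 2x2 matrices over Z/ell (ell prime) by brute force:
    a matrix is invertible iff its determinant is nonzero mod ell."""
    count = 0
    for a, b, c, d in product(range(ell), repeat=4):
        if (a * d - b * c) % ell != 0:
            count += 1
    return count

ell = 3
print(count_gl2(ell))                 # 48
print((ell**2 - 1) * (ell**2 - ell))  # 48, matching the standard formula
```

For general $d$ the order of $GSp_{2d}(\mathbb{Z}/n\mathbb{Z})$ grows much faster than $n^2$, which is why the conjectured inequality in the question is so much stronger than Serre's $n^{1-\epsilon}$ bound.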
# Programmatically Illustrating an Expansion Permutation Based on the discussion over here, I drew this: With this code: \documentclass[tikz,border=3.14mm]{standalone} \usepackage{tikz} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \pgfdeclarelayer{background} \pgfdeclarelayer{foreground} \pgfsetlayers{background,main,foreground} \usetikzlibrary{shapes,arrows,arrows.meta} \tikzset{cross line/.style={>=Stealth,shorten >=5pt,shorten <=5pt, preaction={>=Stealth, draw=white, shorten >=5pt, shorten <=5pt, line width=1.6pt}}} %\tikzstyle{bullet} = [draw, circle,fill=cyan!20, minimum height=2.5em, text centered] \begin{document} \begin{tikzpicture}[pics/perms/.style={code={ \foreach \X [count=\Y] in {#1} { \begin{pgfonlayer}{foreground} \node[draw,bullet,] (T\Y) at (\Y,5) {\ifnum\Y<10 0\fi\Y}; \node[draw,bullet] (B\the\numexpr\X) at (\Y,-5) {\ifnum\Y<10 0\fi\Y}; \end{pgfonlayer} } \foreach \X [count=\Y] in {#1} { \draw[->,>=Stealth,shorten <=-0.35pt] (T\Y.south) -> (B\Y.north); \draw[cross line] (T\Y.south) -> (B\Y.north); } }},bullet/.style={circle,fill=blue!20,line width=.4pt,text=black,text width={width("33")},align=center}, scale=0.5,transform shape] \pic{perms={3,2,1}}; \end{tikzpicture} \end{document} My question is: What's an elegant way to do the same thing with a permutation where some numbers are mapped more than once. If they're all mapped twice, it's fine you can just do: \pic{perms={3,1,2}}; \pic{perms={2,3,1}}; But you can't map one twice, unless they're all mapped twice. For instance, this will show an error: \pic{perms={3,1,2}}; \pic{perms={2, , }}; And you can't map something like 1 to 48 unless there are at least 48 numbers in the list. Here's an example of an expansion permutation from Wikipedia: This problem is in a way simpler than the original one, at least if you know that there is an undocumented math function dim in pgfmathfunctions.misc.code.tex. 
This allows one to make it possible to add lists instead of single items for those cases in which one number should get mapped to multiple targets. dim can then be used to decide whether this is a single item or a list. \documentclass[tikz,border=3.14mm]{standalone} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \pgfdeclarelayer{background} \pgfdeclarelayer{foreground} \pgfsetlayers{background,main,foreground} \usetikzlibrary{arrows.meta} \tikzset{cross line/.style={>=Stealth,shorten >=5pt,shorten <=5pt, preaction={>=Stealth, draw=white, shorten >=5pt, shorten <=5pt, line width=1.6pt}}} \begin{document} \begin{tikzpicture}[pics/perms/.style={code={ \foreach \X [count=\Y] in {#1} {\pgfmathtruncatemacro{\mydim}{dim(\X)} \node[draw,bullet] (T\Y) at (\Y,5) {\ifnum\Y<10 0\fi\Y}; \ifnum\mydim=1 \begin{pgfonlayer}{foreground} \node[draw,bullet] (B\the\numexpr\X) at (\X,-5) {\ifnum\X<10 0\fi\X}; \end{pgfonlayer} \draw[->,>=Stealth,shorten <=-0.35pt] (T\Y.south) -- (B\X.north); \draw[cross line] (T\Y.south) -- (B\X.north); \else \foreach \Z in \X {\begin{pgfonlayer}{foreground} \node[draw,bullet] (B\the\numexpr\Z) at (\Z,-5) {\ifnum\Z<10 0\fi\Z}; \end{pgfonlayer} \draw[->,>=Stealth,shorten <=-0.35pt] (T\Y.south) -- (B\Z.north); \draw[cross line] (T\Y.south) -- (B\Z.north); } \fi} }},bullet/.style={circle,fill=blue!20,line width=.4pt,text=black,text width={width("33")},align=center}, scale=0.5,transform shape] \pic{perms={3,5,{1,4},{2,6}}}; \end{tikzpicture} \end{document}
# Prove that external charge has no net flux on a gaussian surface mathematically I understand how the field lines enter and leave the gaussian surface. But my concern is that the field isn't constant everywhere on the gaussian surface, i.e, there exactly doesn't exist an $$- E\cdot da$$ corresponding to every $$E\cdot da$$. I understand the idea of how the enlargement of the area compensates for the reduction in the field. But I want it mathematically. If you could prove that for a point charge on an irregular surface, the job would be done. I have however, made an attempt to prove it in the below image, where I basically proved how the flux doesn't depend on $$r$$ for an infinitesimally small surface $$da$$. But I am not satisfied with it. Please help. Ignore the proof if it isn't good enough and doesn't make sense, I am just a fresher, hence a noob. [Proof]1 • You might want to check out Stokes' theorem en.wikipedia.org/wiki/Stokes%27_theorem Sep 30 '20 at 9:04 • To clarify you are asking how to prove this generally even in a non uniform field right? I have a proof for uniform field. Sep 30 '20 at 11:14 • your proof has a problem as it only holds for a single point charge Sep 30 '20 at 11:15 • @Buraian , Any field is nothing but an arrangement or superposition of a set of single point charges, So if I can prove it for single external point charge. I can safely say it is valid for any arrangement or a system of multiple point charges combined. So yes, I want it for a non uniform field ( as point charge varies inversely with r^2, it is non-uniform). However, I would appreciate it, if you post it for uniform field too (if it's for an irregular surface) but I think I already know it. Still Thanks a lot, really appreciate your help. :D Sep 30 '20 at 12:01 • Sep 30 '20 at 17:39 Suppose we have a point charge $$q$$ at the origin $$\vec{r}=0$$. Then choose an arbitrary Gaussian surface $$S$$ enclosing a volume $$V$$. 
By definition of flux, the electric flux through the surface is $$\Phi=\iint_S\vec{E}\cdot\vec{dS}$$ By the divergence theorem, this is equal to $$\Phi=\iiint_V\nabla\cdot\vec{E}\ dV\tag{1}$$ Then, since we know the form of $$\vec{E}$$, namely $$\vec{E}=\frac{1}{4\pi\varepsilon_0}\frac{q}{r^2}\hat{r}$$ we can calculate directly its divergence $$\nabla\cdot\vec{E}=\frac{q}{4\pi\varepsilon_0}\nabla\cdot\left(\frac{\hat{r}}{r^2}\right)=\frac{q}{\varepsilon_0}\delta^3(\vec{r})\tag{2}$$ where in the last step I have used the mathematical identity1 $$\nabla\cdot\left(\frac{\hat{r}}{r^2}\right)=4\pi\delta^3(\vec{r}).$$ Inserting $$(2)$$ in $$(1)$$ we have $$\Phi=\iiint_V\frac{q}{\varepsilon_0}\ \delta^3(\vec{r})\ dV$$ And finally, if the surface does not enclose the charge, i.e., $$\vec{r}=0\notin V$$, the last integral vanishes due to the translation property of the Dirac delta2. 1Take a look at this Math.SE post for details. 2Here it is $$\iiint_{V}\delta^3(\vec{r}-\vec{r}_0)\ dV=\begin{cases}0\quad\text{if }\vec{r}_0\notin V\\1\quad\text{if }\vec{r}_0\in V\end{cases}$$ • Amazing! Thank you. Totally satisfied with the answer. However to accept an answer,I am waiting for someone to derive it from the very basics, i.e, without using Dirac delta and divergence theorem. I mean just from the integral E.da part of the theory. The reason being that it will be relatively easy to understand for a larger set of audience (those who don't know Dirac delta and divergence) . Else I don't have any issue with your answer, It's totally perfect. Oct 5 '20 at 14:10
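For readers who prefer to avoid the Dirac delta, the claim can also be checked by evaluating the surface integral directly. The sketch below (my own illustration, with the constant $q/4\pi\varepsilon_0$ set to 1 so the enclosed flux would be $4\pi$) sums $\vec{E}\cdot\hat{n}\,dA$ over a unit sphere with a midpoint rule: the flux is numerically zero when the charge sits outside the sphere, and $4\pi$ when it sits at the centre.

```python
import math

def flux_through_unit_sphere(charge_pos, n_theta=200, n_phi=400):
    """Midpoint-rule estimate of the flux of E = r_hat / r^2 (k*q = 1)
    through the unit sphere centred at the origin."""
    qx, qy, qz = charge_pos
    dtheta = math.pi / n_theta
    dphi = 2.0 * math.pi / n_phi
    flux = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * dtheta
        for j in range(n_phi):
            phi = (j + 0.5) * dphi
            # Point on the unit sphere; its outward normal coincides with it.
            nx = math.sin(theta) * math.cos(phi)
            ny = math.sin(theta) * math.sin(phi)
            nz = math.cos(theta)
            # Vector from the charge to the surface point.
            rx, ry, rz = nx - qx, ny - qy, nz - qz
            r = math.sqrt(rx * rx + ry * ry + rz * rz)
            e_dot_n = (rx * nx + ry * ny + rz * nz) / r**3
            flux += e_dot_n * math.sin(theta) * dtheta * dphi
    return flux

print(flux_through_unit_sphere((3.0, 0.0, 0.0)))  # ~0 (charge outside)
print(flux_through_unit_sphere((0.0, 0.0, 0.0)))  # ~4*pi (charge enclosed)
```

This is exactly the cancellation the original question asks about: the positive $\vec{E}\cdot\hat{n}$ contributions where field lines exit are balanced, patch by patch, by the negative contributions where they enter.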
Is this puzzle like a mathematics exercise? Result: Thanks to Dan Russell’s thoughtful answer and others’ constructive comments, the puzzle statement has been reupholstered, the puzzle has been reopened, and a complete solution is imminent. Original post The rainbow mystery below was readily understood by three posters and two other commenters as the puzzle it was meant to be. The three posted answers presented astutely-targeted partial solutions based on clues, well before “off topic” voting began and led to closure even though this puzzle was off to a good start with no symptoms of being off topic. Question of consistency: Is there something about this puzzle that lands it in the same bin as a mathematics “problem” vs puzzle? Bonus question: Should it have been presented differently? Two many rainbows? [on hold] Wish I’d had a camera at the time, but a cartoon will have to do. This represents a direct view of two actual incomplete rainbow arcs that stop in midair where they cross, lit only by a setting sun. How could this be? I honestly wondered if it was a dream. What is the simplest explanation for this odd pair of rainbows? Why is the smaller one slightly brighter? Notes. Each main arch was accompanied by a rainbow’s usual set of concentric fainter arcs, which exhibited the same phenomenon of stopping sharply where they crossed exactly above or below the main crossing point. Only air is between the point of view and the rainbows. Safe to guess that this effect never occurred more than a century or two ago. Some details of the real-life story have been altered in order to stymie internet searches. The following comment registered stupefyingly many upticks. I'm voting to close this question as off-topic because this belongs on physics.stackexchange.com.
– closevoter with, ironically, no apparent affiliation to physics.stackexchange.com At least the commenter was considerate enough to share their reasoning, but the confident tone may have misled others to take evaluation shortcuts as it might have given the impression that someone had already accurately assessed the puzzle. When this comment’s upticks correlated to closevotes, astonishment led to the impression that all closevoters combined for almost no apparent presence whatsoever at any science SE sites, finding a total of one comment and one all-but-ignored post at any such site, which was not Physics SE. Update: 4 of the upticks may have been automatically applied by the system without closevoters specifically endorsing the comment’s details. Did any closevoter realize that essentially the same puzzle could have been stated without science, in terms of [clickable/hoverable hint /spoiler] for example? A pair of rainbows is just an especially intriguing manifestation of the solution, with natural clues that make it a better puzzle, and happens to be how the paradox presented itself in real life. Did any closevoter genuinely imagine a complete solution that would verify that this is not a puzzle? The solution is quite simple and probably understandable by most solvers, but does hinge on a less-than-obvious aha-like detail from everyday experience. Perhaps we could see specific reasoning posted here from those who voted to close, but please do not be tempted into defensive rationalization. Mistaken closures based on hunches occur often enough that I openly suspect this to be merely another instance and hope that we can clarify a nebulous border between on- and off-topicality. • I remain in neutral territory, but just fyi, I think when you VTC with a comment, then when others select the same VTC reason, the system automatically upvotes the comment. – Alconja Oct 22 '16 at 21:35 • How would the system know which comment to upvote? 
At times there were earlier and later comments that were not upticked. – humn Oct 22 '16 at 21:41 • because the system made the comment too. A user selects VTC, and enters a custom reason, the system add that reason as a comment, on the user's behalf. (I think) – Alconja Oct 22 '16 at 21:43 • Interesting, but in this case the comment was posted minutes before the initial close vote. Would indeed make a difference to know if these upticks were automatic. – humn Oct 22 '16 at 21:44 • perhaps I'm just outright wrong then. :) Either way, your point (re: people taking stock in a confidently worded comment), is still valid. – Alconja Oct 22 '16 at 21:48 • Alconja is correct. When a user selects a custom close reason, they type "I'm voting to close because blah blah blah" into the question-closing popup, which then gets automatically added as a comment under their name. If a later close-voter selects the same custom close reason (e.g. from the review queue), then an upvote is automatically added from their account to the comment. That accounts for up to 4 of the upvotes on Ian's comment; the remaining upvotes must have been just from people who agreed with the comment without voting to close. – Rand al'Thor Oct 23 '16 at 1:01 • I didn't vote to close, because this is a remarkably amazing, and yet confusing, puzzle :) – ABcDexter Oct 24 '16 at 5:40 • I would've kept this open as well - I think the catalyst for the close votes was the phrasing that made it seem like any ordinary Physics.SE question. – Deusovi Oct 25 '16 at 18:10 • I still think this question belongs better on physics.SE. It seems that you're asking us to explain some natural phenomenon, and in my opinion if that were the case I wouldn't really consider that a puzzle. – Buffer Over Read Oct 27 '16 at 0:19
# Thread: Equation of perpendicular bisector 1. ## Equation of perpendicular bisector Next Question .... Sorry :P The line with equation x+3y=12 meets the x and y axes at the points A and B respectively. Find the equation of the perpendicular bisector of AB 2. This requires a couple of steps. The words look all googly and weird but it's pretty basic: You have the equation for a line. You are told to find the bisector (the midpoint) of the line formed by the points A and B, which are your X and Y intercepts. Therefore: $x+3y=12 \Longrightarrow 3y=12-x \Longrightarrow \frac{12-x}{3}=y \Longrightarrow 4-\frac{1}{3}x=y$ To find the X-int, we set Y=0 and solve: $0=4-\frac{1}{3}x \Longrightarrow \frac{1}{3}x=4 \Longrightarrow x=12$ Alternatively, we could have used our original equation. To find the Y-int, we set X=0 and solve. This one is straightforward. Y=4. Now we have the co-ordinates of point A and B: (12,0) and (0,4) respectively. Our question asks us to find the equation of the perp-bisector. Now for a line to be perpendicular to another line, it must have a slope equal to the negative reciprocal of the line it is to be perpendicular to. The slope of our original line is simple to see: $-\frac{1}{3}$. Therefore the negative reciprocal is 3. Thus the slope of our perpendicular line is 3. But we aren't done. We have a very specific point we need to use in order to make sure this line bisects the X and Y intercepts of our original equation. Thus, we need to use the midpoint formula $\frac{x_1+x_2}{2},\frac{y_1+y_2}{2}$, to find the midpoint between (12,0) and (0,4): $\frac{12+0}{2},\frac{0+4}{2} \Longrightarrow (6,2)$ Now, we have a point (6,2), a slope of 3 and a form called the point-slope form of an equation: $y-2=3(x-6) \Longrightarrow y=3x-18+2 \Longrightarrow y=3x-16$ 3. Originally Posted by Math-DumbassX Next Question .... Sorry :P The line with equation x+3y=12 meets the x and y axes at the points A and B respectively.
Find the equation of the perpendicular bisector of AB
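The construction can be verified in a few lines of exact arithmetic. Note that the point-slope substitution must use the midpoint's x-coordinate, so the bisector works out to $y - 2 = 3(x - 6)$, i.e. $y = 3x - 16$ (a sketch of the computation, not tied to any particular library):

```python
from fractions import Fraction

# Line x + 3y = 12 meets the axes at A = (12, 0) and B = (0, 4).
A = (Fraction(12), Fraction(0))
B = (Fraction(0), Fraction(4))

mid = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)  # midpoint (6, 2)
slope_AB = (B[1] - A[1]) / (B[0] - A[0])      # -1/3
perp_slope = -1 / slope_AB                    # negative reciprocal: 3

# Point-slope through the midpoint: y - 2 = 3(x - 6)  ->  y = 3x - 16
intercept = mid[1] - perp_slope * mid[0]
print(perp_slope, intercept)  # 3 -16
```

Using `Fraction` keeps the slopes exact, so the perpendicularity check `slope_AB * perp_slope == -1` holds with equality rather than to floating-point tolerance.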
# How do you solve log_2x+log_2 3=3? Aug 21, 2016 I found: $x = \frac{8}{3}$ #### Explanation: You can condense the two logs and write: ${\log}_{2} \left(3 x\right) = 3$ use the definition of log and write: $3 x = {2}^{3}$ $3 x = 8$ $x = \frac{8}{3}$ Aug 21, 2016 $x = \frac{8}{3}$ #### Explanation: As ${2}^{3} = 8$, we have ${\log}_{2} 8 = 3$, hence ${\log}_{2} x + {\log}_{2} 3 = 3 = {\log}_{2} 8$ or log_2(3×x)=log_2 8 or $3 x = 8$ or $x = \frac{8}{3}$
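Both answers agree, and the result is easy to confirm numerically, since $\log_2\frac{8}{3} + \log_2 3 = \log_2 8 = 3$:

```python
import math

x = 8 / 3
check = math.log2(x) + math.log2(3)
print(check)  # 3.0, up to floating-point rounding
```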
Domain and Range 1. Jun 21, 2007 kuahji Find the domain & range of the function. 2x^2 + 4x - 3 I attempted to solved doing the following 2x^2 + 4x = 3 x^2 + 2x = 3/2 (divided by two) (x+1)^2 = 5/2 (completed the square & added one to both sides) (x+1)^2 - 5/2 = 0 So I put the range was (-5/2, infinity), but the book has it (-5, infinity). It seems any problem where the leading coefficient is greater than one, I'm getting incorrect answers. So there must be an error in how I'm trying to solve the problem. 2. Jun 21, 2007 neutrino I don't understand why you have equated the function to zero. As it stands, the range and domain is the real line, assuming x and f(x) are real. Are there any conditions on x and/or f? 3. Jun 21, 2007 kuahji Figured out the way to do it, sorry. 2x^2 + 4x -3 2x^2 + 4x = 3 2(x^2 + 2x) = 3 2(x+1)^2 = 5 2(x+1)^2 - 5 Even though now I'm kinda curious as to why the first method I tried was incorrect. Like why can't you just divide the whole thing by two. 4. Jun 22, 2007 Gib Z Well the first time you tried you Assumed that it was equal to zero. And because of that, you changed it to another polynomial, where the co efficients are divided by 2. It has the same ZERO's but different values for other things. The second time you tried you didn't assume anything, you just wrote the expression in another completely equivalent way. 5. Jun 22, 2007 radou As Gib Z suggested, "2x^2 + 4x - 3" means nothing. Write "f(x) = 2x^2 + 4x - 3" to avoid primary confusion. 6. Jun 26, 2007 kuahji Thanks, this explanation helps. 7. Jun 29, 2007 i.mehrzad Well i think it makes sense equating the derivative to zero so that you can find out wether there is a maxima or minima. After finding the value of x for which there is a maxima or minima then you will have to find the value of the maxima or minima. 8. Jun 29, 2007 morson Derivatives should not even be considered when determining the domain and range of a real-valued quadratic function. 
There exists quite an easy way to express $$f(x) = ax^2 + bx + c, a \not=\ 0$$ in the form $$f(x) = a(x - h)^2 + k$$. If a is negative, then k is the maximum value of the function, and if it is positive, then k represents the minimum value of the function. The "completing the square" method really is much quicker.
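The completed-square form generalises directly: for $f(x) = ax^2 + bx + c$ with $a \neq 0$, the vertex is at $h = -b/2a$, $k = c - b^2/4a$, so for $a > 0$ the range is $[k, \infty)$. A small sketch confirming the thread's example:

```python
def vertex_form(a, b, c):
    """Return (h, k) with f(x) = a*(x - h)**2 + k."""
    h = -b / (2 * a)
    k = c - b * b / (4 * a)
    return h, k

a, b, c = 2, 4, -3
h, k = vertex_form(a, b, c)
print(h, k)  # -1.0 -5.0, i.e. f(x) = 2(x + 1)^2 - 5, range [-5, infinity)

# Spot-check that k really is a lower bound on a sample of points.
f = lambda x: a * x * x + b * x + c
assert all(f(x / 10) >= k for x in range(-100, 100))
```

This also shows why dividing the whole expression by two went wrong in the first attempt: halving the coefficients gives a different function with the same zeros but a different minimum value.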
Adobe Photoshop 2022 (Version 23.0.1) Crack+ [2022] Tip You may need to choose the Rectangle tool or Ellipse tool instead of the Brush tool to draw the image you’re going to use in place of the photo. 9. **Click once anywhere on the canvas to select the entire image with the Rectangle or Ellipse tool.** If you choose the Rectangle or Ellipse tool and then click to define a shape, the image’s background colors (white or gray) are covered with that color, and you can only see the pixels of the image that you define. 10. **Open the Brush tool and select No Stroke.** If you haven’t yet defined a stroke in the Brush tool, this option gives you a flat brush for painting. When you have your brush set to No Stroke, you can change the color and size by choosing a color from the Color Picker or using the slider. 11. **Click in an area of the image that you wish to use as a replacement photo.** Use the Adobe Photoshop 2022 (Version 23.0.1) Crack+ [Latest] Unlike normal Photoshop, it doesn’t have the feature you won’t find in any other editing software: Photoshop matte-painted video, which can be created with a seamless stitch. It also doesn’t have a great deal of the features found in professional software such as Photoshop’s brushstroke editing, typography, video editing or the complete photo retouching that comes with the professional license. It has a free version (Windows and Mac versions), the one included with every Mac computer, a version for Windows (either Home or Professional), a version for Windows tablets and a version for Android phones. This guide will help you navigate the basic features, moving through the steps of editing or editing your creations. You can find the relevant keyboard shortcuts, tutorials and tutorials in the free version of Photoshop. Photoshop With the name the word photograph, Photoshop speaks its language. 
The world of photography, the work of the photographer, and the relationship of all these elements to a timeless image are as well served by the tool as by knowing the language of photographs. Photoshop is the software used for editing photographs, but it can also be used to edit other types of images and to manipulate videos. It is a very old tool, now easier and better than ever. Let us now put things in order and review the history of Photoshop.

### Photoshop: The Birth and Early Years

At the start of the 1980s, many shops specialized in technical and image editing. The graphics teams hired by advertising agencies were responsible for editing photographs before they appeared in advertisements or catalogs. In the early days of photography, small companies had a team of engineers whose only job was to make cuts and blend images, while the ad agencies consisted of people who had a budget and were charged with advertising sales or creating images. Photographers had a more traditional purpose, to make images that would stand out, and this job was done by graphic designers. Photoshop came about with the recognition that editing photographs can be a lengthy, laborious process that requires the aid of a computer. Its introduction accelerated the editing process and enabled an entirely new group of people to become serious about photography. Many graphic designers who were already used to working on images with a computer now started to process images and photographs.

## Adobe Photoshop 2022 (Version 23.0.1) Keygen Full Version [32|64bit]

Experimenting with Own Ideas for A New Blender (Exclusive) – azhenley

====== akx

There are a lot of interesting topics in Blender development.
It's rather surprising to see this blog post on HN; it is actually a pretty basic tech interview, and it has the tone of being written by somebody who wants to talk to Blender developers in the hope of landing a job as a junior developer.

Data centers are increasingly used by enterprises for storing data, running applications, and/or providing other services. A data center housing a large number of interconnected computers is called a "large-scale" data center; one housing fewer computers is called a "small-scale" data center. The term "cloud computing" describes a computing model in which typically large-scale data centers are interconnected via a network, such as the Internet, and hosted by operators of the data centers. Cloud computing typically involves over-the-Internet provision of dynamically scalable and often virtualized resources.

## System Requirements For Adobe Photoshop 2022 (Version 23.0.1)

Pre-requisites:

1. OS should be at least XP SP2.
2. Basic understanding of PC concepts is needed.
3. A hard drive with at least 20 GB of free space.
4. 3D card (compatible with DirectX 9).
5. Minimum 1 GB RAM.
6. (Optional) A PULSE signature provided by "Frodo's device" can be used, but without it no device is received at all.
### Turbulent drag reduction by oblique wavy wall undulations

Authors: Ghebali, Sacha

Item Type: Thesis or dissertation

Abstract: The turbulent flow past a horizontal wavy wall, positioned at an angle to the main flow direction, is investigated by means of Direct Numerical Simulations with the purpose of reducing the drag. The concept aims to emulate the active control by spanwise wall oscillations---known for its high drag-reducing effectiveness---by use of a passive device. The latter takes advantage of the large characteristic spatial wavelength of the active method, which is a crucial aspect for the potential practical implementation on commercial aircraft. Imparting wall oscillations in the form of a standing wave $w_w = A_\textrm{SSL} \sin\left(\frac{2\pi}{\lambda_x}x\right)$ gives rise to a so-called Spatial Stokes Layer (SSL), resulting in a shear-strain layer which induces a strong suppression of the near-wall turbulence, thereby leading to drag reduction. A skewed wavy wall described by $h_w = A_w \sin\left(\frac{2\pi}{\lambda_x}x + \frac{2\pi}{\lambda_z}z\right)$ is considered, so as to produce a shear-strain layer that is similar to that of the SSL in featuring the same streamwise wavelength $\lambda_x$. The main points of resemblance between the wavy wall and the SSL are investigated, and then contrasted through the identification of significant differences. A reduced-order model is formulated to aid the DNS exploration of the parameter space (wave height, flow angle and wavelength) in the search for the optimal flow configuration. The validity of the assumptions made in the model, as well as its ability to predict the main flow properties, are examined by comparison to the DNS results. Arising from a DNS exploration of the 3D parameter space, a configuration yielding approximately 1\% drag reduction is identified.
The response of the flow properties and the drag to variations in the wave height, flow angle and wavelength is reported and analysed. Major emphasis is placed on quantifying the influence of numerical accuracy on the predicted drag-reduction margin and on the computational effort required to achieve this margin.

Content Version: Open Access

Issue Date: Apr-2018

Date Awarded: Sep-2018

URI: http://hdl.handle.net/10044/1/63827

Supervisor: Chernyshenko, Sergei; Leschziner, Michael

Sponsor/Funder: Innovate UK; Airbus Industrie

Funder's Grant Number: ALFET project (reference number 113022)

Department: Aeronautics

Publisher: Imperial College London

Qualification Level: Doctoral

Qualification Name: Doctor of Philosophy (PhD)

Appears in Collections: Aeronautics PhD theses
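The two wall actuations defined in the abstract can be sketched numerically. This is a minimal Python illustration of the formulas only, not of the thesis's DNS code; the function names and sample parameter values are mine.

```python
import math

def ssl_wall_velocity(x, A_ssl, lam_x):
    """Spanwise wall velocity of the active standing-wave control:
    w_w = A_SSL * sin(2*pi*x / lambda_x), i.e. a Spatial Stokes Layer (SSL)."""
    return A_ssl * math.sin(2 * math.pi * x / lam_x)

def skewed_wavy_wall_height(x, z, A_w, lam_x, lam_z):
    """Height of the passive skewed wavy wall:
    h_w = A_w * sin(2*pi*x/lambda_x + 2*pi*z/lambda_z).

    The crests are lines of constant phase x/lambda_x + z/lambda_z = const,
    i.e. oblique to the streamwise (x) direction, while the streamwise
    wavelength lambda_x matches that of the SSL."""
    return A_w * math.sin(2 * math.pi * (x / lam_x + z / lam_z))

# Example: unit amplitude and wavelengths; the SSL velocity peaks a quarter
# wavelength into the period, and the skewed wall starts at zero height.
peak = ssl_wall_velocity(0.25, 1.0, 1.0)
origin_height = skewed_wavy_wall_height(0.0, 0.0, 1.0, 1.0, 1.0)
```

Setting $\lambda_z \to \infty$ (i.e. dropping the $z$ term) recovers a wavy wall aligned with the flow, which is why the flow angle appears as a separate parameter in the exploration of the design space.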
# Text Images

## What are text images

Text images are just arrays of numbers stored in a tab-delimited file (https://en.wikipedia.org/wiki/Tab-separated_values), where each location in the file is a pixel and the value stored there is the pixel intensity.

## What are they good for?

### You might need to read them

Much software (e.g. Microsoft Excel) doesn't support saving arrays as TIFF files, but much of it does support saving arrays as (tab-separated) text files. So, whether you like it or not, you might come across an image that was saved as a text file. You might not (if you're lucky), but I have, so being able to read them is handy. Beware that `ijtiff::read_txt_img()` assumes a tab-separated file (so something else, like a CSV file, won't work). This is the type of text image that you can save from ImageJ.

### You might (once in a million years) want to write them

A 32-bit TIFF file can only hold values up to $$2^{32} - 1$$, that's approximately $$4 \times 10^9$$. For whatever reason, this might not be enough for you: what if you want to write a value of $$10^{10}$$ to an image? Then you're out of luck with TIFF files (and most, if not all, other image formats), but a text image is your friend. Text images place no restriction on the values therein. They're awkward and inefficient, but they can get you out of a hole sometimes.
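Because the format is nothing more than tab-separated numbers, the round trip is easy to sketch in any language. This is a minimal Python illustration of the idea (it is not `ijtiff` itself, and the helper names are mine); it shows that a value of $$10^{10}$$, far beyond the 32-bit TIFF ceiling, survives the round trip unharmed.

```python
import csv
import io

def write_txt_img(pixels, f):
    """Write a 2-D list of pixel intensities as a tab-separated text image."""
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(pixels)

def read_txt_img(f):
    """Read a tab-separated text image back into a 2-D list of floats."""
    return [[float(value) for value in row]
            for row in csv.reader(f, delimiter="\t")]

# Round-trip a tiny 2x2 "image" containing a value no TIFF could store.
buf = io.StringIO()
write_txt_img([[0, 10 ** 10], [5, 7]], buf)
buf.seek(0)
img = read_txt_img(buf)
```

In practice you would pass an open file handle instead of the `io.StringIO` buffer; the buffer just keeps the example self-contained.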