For some specific cases, it can be useful to load an entire file into memory. For that, we use

```python
with open(os.path.join(diretorio, "file1.txt"), "r") as arquivo:
    conteudo = arquivo.read()
    print(conteudo)
```
Source: Notebooks/Aula_3.ipynb (melissawm/oceanobiopython, gpl-3.0)
We can also use the readline command:

```python
with open(os.path.join(diretorio, "file1.txt"), "r") as arquivo:
    print(arquivo.readline())
```
Called without arguments, it reads the next line of the file; this means that if it is executed several times in a row while the file is open, each call reads the next line of the file.

```python
with open(os.path.join(diretorio, "file1.txt"), "r") as arquivo:
    for i in range(0, 5):
        print(arquivo.readline())
```
Example: read the 10th line of the file, in three different ways:

```python
with open(os.path.join(diretorio, "file1.txt"), "r") as arquivo:
    for i in range(0, 10):
        linha = arquivo.readline()
        if i == 9:
            print(linha)

with open(os.path.join(diretorio, "file1.txt"), "r") as arquivo:
    i = 0
    for linha in arquivo:
        if i == 9:
            print(linha)
        i = i + 1  # counter increment, missing in the flattened source
...
```
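The third approach is truncated in the source. One possibility (a sketch of mine, not necessarily the notebook's own third way) uses `itertools.islice` to jump straight to line index 9; here an in-memory `StringIO` stands in for `file1.txt` so the snippet is self-contained:

```python
from io import StringIO
from itertools import islice

# Stand-in for file1.txt: twenty lines "linha 1" .. "linha 20".
arquivo = StringIO("".join(f"linha {n}\n" for n in range(1, 21)))

# islice(f, 9, 10) yields only the element at index 9, i.e. the 10th line.
decima = next(islice(arquivo, 9, 10))
print(decima)  # → "linha 10\n"
```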
Example: read the first line of each file in a directory and write the result to another file. First, we use a list comprehension to obtain a list of the files in the directory we are interested in, excluding the file teste.txt, and listing each file with its full path.

```python
print([os.path.join(diretorio, item) for item in os.listdir(diretorio) if item != "teste.txt"])
lista = [os.path.join(diretorio, item) for item in os.listdir(diretorio) if item != "teste.txt"]
lista
```
Now, let's read only the first line of each file:

```python
for item in lista:
    with open(item, "r") as arquivo:
        print(arquivo.readline())

with open("resumo.txt", "w") as arquivo_saida:
    for item in lista:
        with open(item, "r") as arquivo:
            arquivo_saida.write(arquivo.readline() + "\n")
```
Now, let's undo the example:

```python
os.remove("resumo.txt")
```
Exercise: Ruiz Family. The Ruiz family receives the newspaper every morning and places it in the magazine rack after reading it. Each evening, with probability 0.3, someone takes all the newspapers from the rack to the paper bin. On the other hand, if there are at least 5 newspapers in the pile, Mr. Ruiz...

```python
transition_ruiz = np.array([[0.0, 1.0, 0.0, 0.0, 0.0],
                            [0.3, 0.0, 0.7, 0.0, 0.0],
                            [0.3, 0.0, 0.0, 0.7, 0.0],
                            [0.3, 0.0, 0.0, 0.0, 0.7],
                            [1.0, 0.0, 0.0, 0.0, 0.0]])
```
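The `stationary_distribution` helper called in the cells below is not defined anywhere in this excerpt. A minimal NumPy sketch of what it presumably computes (a hypothetical reconstruction: solving $\pi P = \pi$ subject to $\sum_i \pi_i = 1$) could look like:

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi @ P = pi with sum(pi) == 1 by least squares.

    This helper is not shown in the source; this is a guessed
    reconstruction of its likely behavior.
    """
    n = P.shape[0]
    # Stack the balance equations (P^T - I) pi = 0 with the normalization row.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi
```

For a simple two-state chain `[[0.9, 0.1], [0.5, 0.5]]` this returns approximately `[5/6, 1/6]`, which can be checked by hand from the balance equation 0.1·π₀ = 0.5·π₁.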
Source: numerical/math/stats/stochastic_processes/entrega-01.ipynb (garciparedes/python-examples, mpl-2.0)
b) If the magazine rack is empty on Sunday night, what is the probability that there is 1 newspaper on Wednesday night?

```python
np.linalg.matrix_power(transition_ruiz, 4)
```
c) Compute the long-run probability that the magazine rack is empty on any given night.

```python
stationary_distribution(transition_ruiz)
```
Exercise 1.36
```python
transition_36 = np.array([[0.00, 0.00, 1.00],
                          [0.05, 0.95, 0.00],
                          [0.00, 0.02, 0.98]])
```
Exercise 1.36 a)
```python
stationary_distribution(transition_36)
```
Exercise 1.36 b)
```python
1 / stationary_distribution(transition_36)
```
Exercise 1.48
```python
n = 12
transition_48 = np.zeros([n, n])
for i in range(n):
    transition_48[i, [(i - 1) % n, (i + 1) % n]] = [0.5] * 2
transition_48
```
Exercise 1.48 a) The stationary distribution is $$\pi_{i} = 1/12 \quad \forall i \in \{1, 2, \ldots, 12\}$$ because the transition matrix is doubly stochastic (both rows and columns sum to one). Therefore, since the mean number of steps is requested: $$E_i(T_i) = \frac{1}{\pi_i} = 12.$$

```python
1 / stationary_distribution(transition_48)
```
Exercise 1.48 b)
```python
import numpy as np

n = 100000
y = 0
d = 12
for n_temp in range(1, n + 1):
    visited = set()
    k = np.random.choice(range(d))
    position = k
    s = str(position % d) + ' '
    visited.add(position % d)
    position += np.random.choice([-1, 1])
    s += str(position % d) + ' '
    visited.add(position % ...
```
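The simulation cell above is cut off. Judging from the `visited` set, one plausible reading is that it estimates the mean number of steps for the random walk on the 12-cycle to visit every state. A self-contained standard-library sketch of that interpretation (the trial count and seed are my choices, not the notebook's):

```python
import random

random.seed(42)

d = 12         # number of states on the cycle
trials = 2000  # assumed trial count for this sketch
total = 0
for _ in range(trials):
    position = random.randrange(d)
    visited = {position}
    steps = 0
    while len(visited) < d:  # walk until every state has been seen
        position = (position + random.choice([-1, 1])) % d
        visited.add(position)
        steps += 1
    total += steps
print(total / trials)  # mean cover time; theory gives d*(d-1)/2 = 66
```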
Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ i...
```python
## Your code here
from collections import Counter
import random

#threshold = 1e-5
#word_count = Counter(int_words)
#total_count_words = len(int_words)
#frequency = {word: count/total_count_words for word, count in word_count.items()}
#p_drop = {word: 1 - np.sqrt(threshold/frequency[word]) for word, count in word_count....
```
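Uncommented and completed, Mikolov's subsampling step might look like the following sketch. The threshold value is the one from the commented-out attempt; the toy corpus `int_words` is mine, standing in for the notebook's tokenized data:

```python
from collections import Counter
import math
import random

random.seed(1)
threshold = 1e-5
int_words = [0] * 5000 + [1] * 50 + [2] * 50  # toy corpus: word 0 is very frequent

word_count = Counter(int_words)
total = len(int_words)
freq = {w: c / total for w, c in word_count.items()}
# Mikolov's subsampling: drop word w with probability 1 - sqrt(t / f(w)).
p_drop = {w: 1 - math.sqrt(threshold / freq[w]) for w in word_count}
train_words = [w for w in int_words if random.random() > p_drop[w]]
print(len(train_words))  # only a small fraction of the frequent word survives
```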
Source: embeddings/Skip-Gram_word2vec.ipynb (VenkatRepaka/deep-learning, mit)
Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually l...
```python
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    # Your code here
    #start_index = 0
    #end_index = 0;
    #last_index = len(words)-1
    #to_pick = random.randint(1,window_size+1)
    #if (idx-window_size) < 0:
    #    start_index = 0
    #if (idx+en...
```
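A completed version, following the random window size $R \in [1, C]$ described in the text (this is my sketch, not the notebook author's final answer):

```python
import random

def get_target(words, idx, window_size=5):
    """Return the words within a randomly sized window around index idx."""
    R = random.randint(1, window_size)  # random radius, per Mikolov et al.
    start = max(idx - R, 0)             # clamp at the start of the list
    # everything in the window except the center word itself
    return words[start:idx] + words[idx + 1:idx + R + 1]

random.seed(0)
print(get_target(list(range(10)), idx=5, window_size=3))
```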
Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per inp...
```python
def get_batches(words, batch_size, window_size=5):
    ''' Create a generator of word batches as a tuple (inputs, targets) '''
    n_batches = len(words)//batch_size
    print('no of batches' + str(n_batches))
    # only full batches
    words = words[:n_batches*batch_size]
    for idx in range(0, len(wo...
```
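A completed generator along these lines (a sketch: it makes one input/target row per pair, as the surrounding text describes, and bundles a minimal `get_target` so it is self-contained):

```python
import random

def get_target(words, idx, window_size=5):
    """Words within a random-size window around idx (helper assumed by the text)."""
    R = random.randint(1, window_size)
    start = max(idx - R, 0)
    return words[start:idx] + words[idx + 1:idx + R + 1]

def get_batches(words, batch_size, window_size=5):
    """Yield (inputs, targets) lists with one entry per input/target pair."""
    n_batches = len(words) // batch_size
    words = words[:n_batches * batch_size]  # keep only full batches
    for i in range(0, len(words), batch_size):
        batch = words[i:i + batch_size]
        x, y = [], []
        for j, word in enumerate(batch):
            targets = get_target(batch, j, window_size)
            x.extend([word] * len(targets))  # repeat the input once per target
            y.extend(targets)
        yield x, y

random.seed(0)
x, y = next(get_batches(list(range(20)), batch_size=10, window_size=2))
print(len(x) == len(y))  # → True
```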
Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the n...
```python
n_vocab = len(int_to_vocab)
n_embedding = 200  # Number of embedding features
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), minval=-1, maxval=1))  # create embedding weight matrix here
    embed = tf.nn.embedding_lookup(embedding, ids=inputs)  # use tf.nn.embedding_l...
```
Negative sampling For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from th...
```python
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
    softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))  # create softmax weight matrix here
    softmax_b = tf.Variable(tf.zeros(n_vocab))  # create softmax biases here
    # Calculate the loss using neg...
```
Training Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
```python
epochs = 10
batch_size = 1000
window_size = 10

with train_graph.as_default():
    saver = tf.train.Saver()

with tf.Session(graph=train_graph) as sess:
    iteration = 1
    loss = 0
    sess.run(tf.global_variables_initializer())
    for e in range(1, epochs+1):
        batches = get_batches(train_words, batch_size, ...
```
Question 1 - Feature Observation As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood): - 'RM' is the average number of rooms among homes in the neighborhood. - 'LSTAT' is the percentage of homeowners in the neighborhood considered "...
```python
fig = plt.figure()
for plt_num, feature in enumerate(features):
    graphs = fig.add_subplot(2, 2, plt_num + 1)
    graphs.scatter(features[feature], prices)
    lin_reg = np.poly1d(np.polyfit(features[feature], prices, deg=1))
    lin_x = np.linspace(features[feature].min(), features[feature].max(), 2)
    lin_y = lin...
```
Source: boston_housing/boston_housing.ipynb (ZhukovGreen/UMLND, gpl-3.0)
Answer: Of the three features, only an increase in RM leads to an increase in MEDV; increasing either of the other two features is likely to decrease it. RM correlates positively with price because the number of rooms should be proportional to the dwelling area, and living area correlates positively with price. LSTAT...
```python
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score

def performance_metric(y_true, y_predict):
    """ Calculates and returns the performance score between
        true and predicted values based on the metric chosen. """
    # TODO: Calculate the performance score between 'y_true' and 'y_predict'
    ...
```
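The coefficient of determination can also be computed directly. A dependency-free sketch of the quantity `r2_score` returns, namely $R^2 = 1 - SS_{res}/SS_{tot}$ (the formula, not sklearn's implementation):

```python
def performance_metric(y_true, y_predict):
    """Return the coefficient of determination R^2 for a set of predictions."""
    mean_true = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_predict))   # residual sum of squares
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)              # total sum of squares
    return 1 - ss_res / ss_tot

print(performance_metric([3, -0.5, 2, 7], [2.5, 0.0, 2, 8]))  # ≈ 0.9486
```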
Answer: The model has a coefficient of determination, R^2, of 0.923. The hypothesis correctly captures the variation of the target variable: R^2 is high, and the variance of y_true - y_predict looks reasonably small. Implementation: Shuffle and Split Data. Your next implementation requires that you take the Boston housing datas...
```python
# TODO: Import 'train_test_split'
# (sklearn.cross_validation is deprecated; modern scikit-learn keeps train_test_split in sklearn.model_selection)
from sklearn import cross_validation

# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = cross_validation.train_test_split(features, prices, test_size=0.2, ...
```
Question 3 - Training and Testing What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm? Hint: What could go wrong with not having a way to test your model? Answer: The benefit of splitting the dataset is that we are able to pick a hypothesis based on mi...
```python
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
```
Question 4 - Learning the Data Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model? Hint: Are the learning curves converging to parti...
```python
vs.ModelComplexity(X_train, y_train)
```
Question 5 - Bias-Variance Tradeoff When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions? Hint: How do you know when a model is suffering fro...
```python
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.metrics import make_scorer
from sklearn.model_selection import ShuffleSplit
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import GridSearchCV

def fit_model(X, y):
    """ Performs grid search over the...
```
Answer: The optimal model's 'max_depth' parameter is 5. My guess in Q6 was 5, which matches the best max_depth found by the grid search analysis. Question 10 - Predicting Selling Prices Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients tha...
```python
# Produce a matrix for client data
client_data = [[5, 17, 15],  # Client 1
               [4, 32, 22],  # Client 2
               [8, 3, 12]]   # Client 3

# Show predictions
for i, price in enumerate(reg.predict(client_data)):
    print "Predicted selling price for Client {}'s home: ${:,.2f}".format(i+1, price)
print '\n...
```
Answer: Predicted selling price for Client 1's home: $424,935.00. Predicted selling price for Client 2's home: $284,200.00. Predicted selling price for Client 3's home: $933,975.00. Client 1's price is around the mean of the price distribution. RM and LSTAT are not that good, but they are compensated by PTRATIO, which is ...
```python
vs.PredictTrials(features, prices, fit_model, client_data)
```
Step 2: List available Datasets Now we can interact with NEXUS using the nexuscli python module. The nexuscli module has a number of useful methods that allow you to easily interact with the NEXUS webservice API. One of those methods is nexuscli.dataset_list which returns a list of Datasets in the system along with the...
```python
# TODO: Import the nexuscli python module.

# Target the nexus webapp server
nexuscli.set_target("http://nexus-webapp:8083")

# TODO: Call nexuscli.dataset_list() and print the results
```
Source: esip-workshop/student-material/workshop1/4 - Student Exercise.ipynb (dataplumber/nexus, apache-2.0)
Step 3: Run a Time Series Now that we can interact with NEXUS using the nexuscli python module, we would like to run a time series. To do this, we will use the nexuscli.time_series method. The signature for this method is described below: nexuscli.time_series(datasets, bounding_box, start_datetime, end_datetime, spark...
```python
import time
import nexuscli
from datetime import datetime
from shapely.geometry import box

# TODO: Create a bounding box using the box method imported above

# TODO: Plot the bounding box using the helper method plot_box

# Do not modify this line ##
start = time.perf_counter()
# ############################

# TODO...
```
Step 3a: Run for a Longer Time Period Now that you have successfully generated a time series for approximately one year of data, try generating a longer time series by increasing the end date to 2016-12-31. This will take a little longer to execute, since there is more data to analyze, but it should finish in under a ...
```python
import time
import nexuscli
from datetime import datetime
from shapely.geometry import box

bbox = box(-150, 40, -120, 55)
plot_box(bbox)

# Do not modify this line ##
start = time.perf_counter()
# ############################

# TODO: Call the time_series method for the AVHRR_OI_L4_GHRSST_NCEI dataset using
# your bo...
```
Step 4: Run two Time Series and plot them side-by-side The time_series method can be used on up to two datasets at a time for comparison. Let's take a look at another region and see how to generate two time series and plot them side by side. Hurricane Katrina passed to the southwest of Florida on Aug 27, 2005. The ...
```python
import time
import nexuscli
from datetime import datetime
from shapely.geometry import box

# TODO: Create a bounding box using the box method imported above

# TODO: Plot the bounding box using the helper method plot_box

# Do not modify this line ##
start = time.perf_counter()
# ############################

# TOD...
```
Step 5: Run a Daily Difference Average (Anomaly) calculation Let's return to The Blob region. But this time we're going to use a different calculation, Daily Difference Average (aka. Anomaly plot). The Daily Difference Average algorithm compares a dataset against a climatological mean and produces a time series of the...
```python
import time
import nexuscli
from datetime import datetime
from shapely.geometry import box

bbox = box(-150, 40, -120, 55)
plot_box(bbox)

# Do not modify this line ##
start = time.perf_counter()
# ############################

# TODO: Call the daily_difference_average method for the AVHRR_OI_L4_GHRSST_NCEI dataset us...
```
A variety of bijective transformations live in the pyro.distributions.transforms module, and the classes to define transformed distributions live in pyro.distributions. We first create the base distribution of $X$ and the class encapsulating the transform $\text{exp}(\cdot)$:
```python
dist_x = dist.Normal(torch.zeros(1), torch.ones(1))
exp_transform = T.ExpTransform()
```
Source: tutorial/source/normalizing_flows_i.ipynb (uber/pyro, apache-2.0)
The class ExpTransform derives from Transform and defines the forward, inverse, and log-absolute-derivative operations for this transform,
\begin{align}
g(x) &= \exp(x)\\
g^{-1}(y) &= \log(y)\\
\log\left(\left|\frac{dg}{dx}\right|\right) &= x.
\end{align}
In general, a transform class defines these three operations...
```python
dist_y = dist.TransformedDistribution(dist_x, [exp_transform])
```
Now, plotting samples from both to verify that we have produced the log-normal distribution:

```python
plt.subplot(1, 2, 1)
plt.hist(dist_x.sample([1000]).numpy(), bins=50)
plt.title('Standard Normal')
plt.subplot(1, 2, 2)
plt.hist(dist_y.sample([1000]).numpy(), bins=50)
plt.title('Standard Log-Normal')
plt.show()
```
Our example uses a single transform. However, we can compose transforms to produce more expressive distributions. For instance, if we apply an affine transformation we can produce the general log-normal distribution,
\begin{align}
X &\sim \mathcal{N}(0,1)\\
Y &= \exp(\mu + \sigma X),
\end{align}
or rather, $Y\sim\te...
```python
dist_x = dist.Normal(torch.zeros(1), torch.ones(1))
affine_transform = T.AffineTransform(loc=3, scale=0.5)
exp_transform = T.ExpTransform()
dist_y = dist.TransformedDistribution(dist_x, [affine_transform, exp_transform])

plt.subplot(1, 2, 1)
plt.hist(dist_x.sample([1000]).numpy(), bins=50)
plt.title('Standard Normal')...
```
For the forward operation, transformations are applied in the order of the list that is the second argument to TransformedDistribution. In this case, first AffineTransform is applied to the base distribution and then ExpTransform. Learnable Univariate Distributions in Pyro Having introduced the interface for invertible...
```python
import numpy as np
from sklearn import datasets
from sklearn.preprocessing import StandardScaler

n_samples = 1000
X, y = datasets.make_circles(n_samples=n_samples, factor=0.5, noise=0.05)
X = StandardScaler().fit_transform(X)

plt.title(r'Samples from $p(x_1,x_2)$')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.scatte...
```
Standard transforms derive from the Transform class and are not designed to contain learnable parameters. Learnable transforms, on the other hand, derive from TransformModule, which is a torch.nn.Module and registers parameters with the object. We will learn the marginals of the above distribution using such a transfor...
```python
base_dist = dist.Normal(torch.zeros(2), torch.ones(2))
spline_transform = T.Spline(2, count_bins=16)
flow_dist = dist.TransformedDistribution(base_dist, [spline_transform])
```
This transform passes each dimension of its input through a separate monotonically increasing function known as a spline. From a high-level, a spline is a complex parametrizable curve for which we can define specific points known as knots that it passes through and the derivatives at the knots. The knots and their deri...
```python
%%time
steps = 1 if smoke_test else 1001
dataset = torch.tensor(X, dtype=torch.float)
optimizer = torch.optim.Adam(spline_transform.parameters(), lr=1e-2)
for step in range(steps):
    optimizer.zero_grad()
    loss = -flow_dist.log_prob(dataset).mean()
    loss.backward()
    optimizer.step()
    flow_dist.clear_cache...
```
Note that we call flow_dist.clear_cache() after each optimization step to clear the transform's forward-inverse cache. This is required because flow_dist's spline_transform is a stateful TransformModule rather than a purely stateless Transform object. Purely functional Pyro code typically creates Transform objects each...
```python
X_flow = flow_dist.sample(torch.Size([1000,])).detach().numpy()

plt.title(r'Joint Distribution')
plt.xlabel(r'$x_1$')
plt.ylabel(r'$x_2$')
plt.scatter(X[:,0], X[:,1], label='data', alpha=0.5)
plt.scatter(X_flow[:,0], X_flow[:,1], color='firebrick', label='flow', alpha=0.5)
plt.legend()
plt.show()

plt.subplot(1, 2, 1)...
```
As we can see, we have learnt close approximations to the marginal distributions, $p(x_1),p(x_2)$. It would have been challenging to fit the irregularly shaped marginals with standard methods, e.g., a mixture of normal distributions. As expected, since there is a dependency between the two dimensions, we do not learn a...
```python
base_dist = dist.Normal(torch.zeros(2), torch.ones(2))
spline_transform = T.spline_coupling(2, count_bins=16)
flow_dist = dist.TransformedDistribution(base_dist, [spline_transform])
```
Similarly to before, we train this distribution on the toy dataset and plot the results:
```python
%%time
steps = 1 if smoke_test else 5001
dataset = torch.tensor(X, dtype=torch.float)
optimizer = torch.optim.Adam(spline_transform.parameters(), lr=5e-3)
for step in range(steps+1):
    optimizer.zero_grad()
    loss = -flow_dist.log_prob(dataset).mean()
    loss.backward()
    optimizer.step()
    flow_dist.clear_cac...
```
We see from the output that this normalizing flow has successfully learnt both the univariate marginals and the bivariate distribution. Conditional versus Joint Distributions Background In many cases, we wish to represent conditional rather than joint distributions. For instance, in performing variational inference, th...
```python
dist_base = dist.Normal(torch.zeros(1), torch.ones(1))
x1_transform = T.spline(1)
dist_x1 = dist.TransformedDistribution(dist_base, [x1_transform])
```
A conditional transformed distribution is created by passing the base distribution and list of conditional and non-conditional transforms to the ConditionalTransformedDistribution class:
```python
x2_transform = T.conditional_spline(1, context_dim=1)
dist_x2_given_x1 = dist.ConditionalTransformedDistribution(dist_base, [x2_transform])
```
You will notice that we pass the dimension of the context variable, $M=1$, to the conditional spline helper function. Until we condition on a value of $x_1$, the ConditionalTransformedDistribution object is merely a placeholder and cannot be used for sampling or scoring. By calling its .condition(context) method, we ob...
```python
x1 = torch.ones(1)
print(dist_x2_given_x1.condition(x1).sample())
```
In general, the context variable may have batch dimensions and these dimensions must broadcast over the batch dimensions of the input variable. Now, combining the two distributions and training it on the toy dataset:
```python
%%time
steps = 1 if smoke_test else 5001
modules = torch.nn.ModuleList([x1_transform, x2_transform])
optimizer = torch.optim.Adam(modules.parameters(), lr=3e-3)
x1 = dataset[:,0][:,None]
x2 = dataset[:,1][:,None]
for step in range(steps):
    optimizer.zero_grad()
    ln_p_x1 = dist_x1.log_prob(x1)
    ln_p_x2_given_x1...
```
Algorithm: Ordered Dict

- Use an ordered dict to track the insertion order of each key
- Flatten the list of values

Complexity:

- Time: O(n)
- Space: O(n)

Code: Ordered Dict
```python
from collections import OrderedDict

def group_ordered_alt(list_in):
    if list_in is None:
        return None
    result = OrderedDict()
    for value in list_in:
        result.setdefault(value, []).append(value)
    return [v for group in result.values() for v in group]
```
Source: interactive-coding-challenges/staging/sorting_searching/group_ordered/group_ordered_solution.ipynb (saashimi/code_guild, mit)
Unit Test The following unit test is expected to fail until you solve the challenge.
```python
%%writefile test_group_ordered.py
from nose.tools import assert_equal

class TestGroupOrdered(object):
    def test_group_ordered(self, func):
        assert_equal(func(None), None)
        print('Success: ' + func.__name__ + " None case.")
        assert_equal(func([]), [])
        print('Success: ' + func.__name__ ...
```
Read in differential expression results as a Pandas data frame to get differentially expressed gene list
```python
#Read in DESeq2 results
genes = pandas.read_csv("DE_genes.csv")

#View top of file
genes.head(10)

#Extract genes that are differentially expressed with a pvalue less than a certain cutoff (pvalue < 0.05 or padj < 0.05)
genes_DE_only = genes.loc[(genes.pvalue < 0.05)]

#View top of file
genes_DE_only.head(10)

#Check how...
```
Source: notebooks/rnaSeq/Functional_Enrichment_Analysis_Pathway_Visualization.ipynb (ucsd-ccbb/jupyter-genomics, mit)
Translate Ensembl IDs to Gene Symbols and Entrez IDs using mygene.info API
```python
#Extract list of DE genes (Check to make sure this code works, this was adapted from a different notebook)
de_list = genes_DE_only[genes_DE_only.columns[0]]

#Remove .* from end of Ensembl ID
de_list2 = de_list.replace("\.\d", "", regex=True)

#Add new column with reformatted Ensembl IDs
genes_DE_only["Full_Ensembl"] = ...
```
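One detail worth hedging here: the pattern `\.\d` strips only a single digit after the dot, so a multi-digit version suffix like `.12` would be left half-stripped. Anchoring with `\.\d+$` is safer. A small self-contained check of that variant (the example IDs are mine, for illustration only):

```python
import pandas as pd

ids = pd.Series(["ENSG00000141510.12", "ENSG00000012048.9"])
# Strip the trailing version suffix; \.\d+$ handles multi-digit versions.
stripped = ids.replace(r"\.\d+$", "", regex=True)
print(stripped.tolist())  # → ['ENSG00000141510', 'ENSG00000012048']
```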
Run ToppGene API Include path for the input .xml file and path and name of the output .xml file. Outputs all 17 features of ToppGene.
```python
!curl -v -H 'Content-Type: text/xml' --data @/data/test/test.xml -X POST https://toppgene.cchmc.org/api/44009585-27C5-41FD-8279-A5FE1C86C8DB > /data/test/testoutfile.xml

#Display output .xml file
import xml.dom.minidom
xml = xml.dom.minidom.parse("/data/test/testoutfile.xml")
pretty_xml_as_string = xml.toprettyxml...
```
Parse ToppGene results into Pandas data frame
```python
import xml.dom.minidom
import pandas as pd
import numpy

#Parse through .xml file
def load_parse_xml(data_file):
    """Check if file exists. If file exists, load and parse the data file.
    """
    if os.path.isfile(data_file):
        print "File exists. Parsing..."
        data_parse = ET.Elemen...
```
Display the dataframe of each ToppGene feature
```python
#Dataframe for GeneOntologyMolecularFunction
df.loc[df['ToppGene Feature'] == 'GeneOntologyMolecularFunction']

#Dataframe for GeneOntologyBiologicalProcess
df.loc[df['ToppGene Feature'] == 'GeneOntologyBiologicalProcess']

#Dataframe for GeneOntologyCellularComponent
df.loc[df['ToppGene Feature'] == 'GeneOntologyCellu...
```
Extract the KEGG pathway IDs from the ToppGene output (write to csv file)
```python
#Number of significant KEGG pathways
total_KEGG_pathways = df.loc[df['Source'] == 'BioSystems: KEGG']
print "Number of significant KEGG pathways: " + str(len(total_KEGG_pathways.index))

df = df.loc[df['Source'] == 'BioSystems: KEGG']
df.to_csv('/data/test/keggpathways.csv', index=False)
mapping_df = pandas.read_csv('...
```
Create dataframe that includes the KEGG IDs that correspond to the significant pathways outputted by ToppGene
```python
#Create array of KEGG IDs that correspond to the significant pathways outputted by ToppGene
KEGG_ID_array = []
for ID in df.ix[:, 2]:
    x = int(ID)
    for index, BSID in enumerate(mapping_df.ix[:, 0]):
        y = int(BSID)
        if x == y:
            KEGG_ID_array.append(mapping_df.get_value(index, 1, takeable=...
```
Run Pathview to map and render user data on the pathway graphs outputted by ToppGene Switch to R kernel here
```r
#Set working directory
working_dir <- "/data/test"
setwd(working_dir)
date <- Sys.Date()

#Set R options
options(jupyter.plot_mimetypes = 'image/png')
options(useHTTPS=FALSE)
options(scipen=500)

#Load R packages from CRAN and Bioconductor
require(limma)
require(edgeR)
require(DESeq2)
require(RColorBrewer)
require(clu...
```
Create matrix-like structure to contain entrez ID and log2FC for gene.data input
```r
#Extract entrez ID and log2FC from the input DE genes
#Read in differential expression results to get differentially expressed gene list
#Read in DE_genes_converted results (generated in jupyter notebook)
genes <- read.csv("DE_genes_converted.csv")[, c('entrezgene', 'log2FoldChange')]

#Remove NA ...
```
Create vector containing the KEGG IDs of all the significant target pathways
```r
#Read in pathways that you want to map to (from toppgene pathway results)
#Store as a vector
pathways <- read.csv("/data/test/keggidlist.csv")
head(pathways, 12)
pathways.vector <- as.vector(pathways$KEGG.ID)
pathways.vector

#Loop through all the pathways in pathways.vector
#Generate Pathview pathways for each one (nati...
```
Display each of the significant pathway colored overlay diagrams. Switch back to the py27 kernel here.
```python
#Display native KEGG graphs
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import pandas
%matplotlib inline

#for loop that iterates through the pathway images and displays them
pathways = pandas.read_csv("/data/test/keggidlist.csv")
pathways
for i in pathways.ix[:, 0]:
    image = i
    address...
```
Weijun Luo and Cory Brouwer. Pathview: an R/Bioconductor package for pathway-based data integration and visualization. Bioinformatics, 29(14):1830-1831, 2013. doi: 10.1093/bioinformatics/btt285.

Implement the KEGG_pathway_vis Jupyter Notebook (by L. Huang). Note: this only works for one pathway (the first one).
```python
#Import more python modules
import sys

#To access visJS_module and entrez_to_symbol module
sys.path.append(os.getcwd().replace('/data/test', '/data/CCBB_internal/interns/Lilith/PathwayViz'))
import visJS_module
from ensembl_to_entrez import entrez_to_symbol
import networkx as nx
import matplotlib.pyplot as plt
imp...
```
notebooks/rnaSeq/Functional_Enrichment_Analysis_Pathway_Visualization.ipynb
ucsd-ccbb/jupyter-genomics
mit
The miracle that makes modular arithmetic work is that the end-result of a computation "mod m" is not changed if one works "mod m" along the way. At least this is true if the computation only involves addition, subtraction, and multiplication.
((17 + 38) * (105 - 193)) % 13  # Do a bunch of stuff, then take the representative modulo 13.
(((17%13) + (38%13)) * ((105%13) - (193%13))) % 13  # Working modulo 13 along the way.
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
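This identity can also be checked mechanically. Here is a small sketch (the helper name `check_mod_identity` is ours, not the notebook's) that compares a direct computation against one reduced modulo m at every step, over many random inputs:

```python
from random import randint

def check_mod_identity(trials=1000, m=13):
    """Verify that reducing mod m at each step never changes the final answer."""
    for _ in range(trials):
        a, b, c, d = [randint(-500, 500) for _ in range(4)]
        direct = ((a + b) * (c - d)) % m
        stepwise = (((a % m) + (b % m)) * ((c % m) - (d % m))) % m
        if direct != stepwise:
            return False
    return True

check_mod_identity()  # True: the two approaches always agree
```

Note that Python's `%` returns a non-negative representative even for negative operands (when the modulus is positive), which is what makes the stepwise reduction safe here.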
It might seem tedious to carry out this "reduction mod m" at every step along the way. But the advantage is that you never have to work with numbers much bigger than the modulus (m) if you can reduce modulo m at each step. For example, consider the following computation.
(3**999) % 1000 # What are the last 3 digits of 3 raised to the 999 power?
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
The result will probably have the letter "L" at the end, indicating that Python switched into "long-integer" mode along the way. Indeed, the computation asked Python to first raise 3 to the 999 power (a big number!) and then compute the remainder after division by 1000 (the last 3 digits). But what if we could reduce ...
P = 1  # The "running product" starts at 1.
for i in range(999):  # We repeat the following line 999 times, as i traverses the list [0,1,...,998].
    P = (P * 3) % 1000  # We reduce modulo 1000 along the way!
print P
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
The result of this computation should not have the letter "L" at the end, because Python never had to work with long integers. Computations with long integers are time-consuming, and unnecessary if you only care about the result of a computation modulo a small number m. Performance analysis The above loop works quickl...
def powermod_1(base, exponent, modulus):  # The naive approach.
    return (base**exponent) % modulus

def powermod_2(base, exponent, modulus):
    P = 1  # Start the running product at 1.
    e = 0
    while e < exponent:  # The while loop saves memory, relative to a for loop, by avoiding the storage of a list.
        ...
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Now let's compare the performance of these two functions. It's also good to double-check the code in powermod_2 and run it to verify the results. The reason is that loops like the while loop above are classic sources of off-by-one errors. Should e start at zero or one? Should the while loop have the condition e <...
%timeit powermod_1(3,999,1000)
%timeit powermod_2(3,999,1000)
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
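One way to guard against off-by-one errors is to test the loop against a trusted reference. Here is a sketch (the function name `powermod_loop` is ours) that re-implements the loop body of powermod_2 and spot-checks it against Python's built-in three-argument pow, including the edge case of a zero exponent:

```python
def powermod_loop(base, exponent, modulus):
    """Multiply by base exactly `exponent` times, reducing mod `modulus` each step."""
    result = 1
    e = 0
    while e < exponent:   # runs for e = 0, 1, ..., exponent - 1: exponent iterations in all
        result = (result * base) % modulus
        e = e + 1
    return result

# Spot-check the loop bounds, including the edge case exponent = 0:
for b, e, m in [(3, 999, 1000), (2, 0, 7), (5, 1, 7), (10, 20, 17)]:
    assert powermod_loop(b, e, m) == pow(b, e, m)
```

If the loop condition were `e <= exponent` or e started at 1, one of these checks would fail, which is exactly what such a test is for.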
The second powermod function was probably much slower, even though we reduced the size of the numbers along the way. But perhaps we just chose some input parameters (3,999,1000) which were inconvenient for the second function. To compare the performance of the two functions, it would be useful to try many different i...
import timeit as TI
TI.timeit('powermod_1(3,999,1000)', "from __main__ import powermod_1", number=10000)
TI.timeit('powermod_2(3,999,1000)', "from __main__ import powermod_2", number=10000)
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
The syntax of the timeit function is a bit challenging. The first parameter is the Python code which we are timing (as a string), in this case powermod_*(3,999,1000). The second parameter probably looks strange. It exists because the timeit function sets up a little isolation chamber to run the code -- within this i...
from random import randint  # randint chooses random integers.
print "My number is ", randint(1,10)  # Run this line many times over to see what happens!
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
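The setup string of timeit can hold any one-time preparation, not just imports. A small sketch of the same pattern, defining the timed function inside the setup code so the snippet is fully self-contained (the helper name `powermod` here is ours):

```python
import timeit

# The second argument runs once before timing; any names it defines are
# visible to the timed snippet in the first argument.
elapsed = timeit.timeit('powermod(3, 99, 100)',
                        setup='def powermod(b, e, m): return pow(b, e, m)',
                        number=10000)
elapsed  # total seconds for 10000 runs (machine-dependent)
```

This is the same "isolation chamber" at work: without the setup code, the timed snippet would raise a NameError, because it cannot see names from the surrounding session.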
The randint(a,b) command chooses a random integer between a and b, inclusive! Unlike the range(a,b) command, which iterates from a to b-1, the randint command includes both a and b as possibilities. The following lines iterate randint(1,10) and keep track of how often each output occurs. The resulting frequency d...
Freq = {1:0, 2:0, 3:0, 4:0, 5:0, 6:0, 7:0, 8:0, 9:0, 10:0}  # We prefer a dictionary here.
for t in range(10000):
    n = randint(1,10)  # Choose a random number between 1 and 10.
    Freq[n] = Freq[n] + 1
print Freq
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
For fun, and as a template for other explorations, we plot the frequencies in a histogram.
%matplotlib inline
import matplotlib.pyplot as plt
plt.bar(Freq.keys(), Freq.values())  # The keys 1,...,10 are used as bins. The values are used as bar heights.
plt.show()
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Putting together the randint function and the timeit function, we can compare the performance of powermod_1 and powermod_2 when given random inputs.
time_1 = 0  # Tracking the time taken by the powermod_1 function.
time_2 = 0  # Tracking the time taken by the powermod_2 function.
for t in range(1000):  # One thousand samples are taken!
    base = randint(10,99)  # A random 2-digit base.
    exponent = randint(1000,1999)  # A random 4-digit exponent.
    modulus = randin...
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Now we can be pretty sure that the powermod_1 function is faster (perhaps by a factor of 8-10) than the powermod_2 function we designed. At least, this is the case for inputs in the 2-3 digit range that we sampled. But why? We reduced the complexity of the calculation by using the mod operation % throughout. Here a...
%timeit 1238712 % 1237
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
So the speed difference is not due to the number of mod operations. But the other issues are relevant. The Python developers have worked hard to make it run fast -- built-in operations like ** will almost certainly be faster than any function that you write with a loop in Python. The developers have written programs...
pow(3,999)  # A long number.
pow(3,999,1000)  # Note that there's no L at the end!
pow(3,999) % 1000  # The old way
%timeit pow(3,999,1000)
%timeit pow(3,999) % 1000
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
The pow(b,e,m) command should give a significant speedup, as compared to the pow(b,e) command. Remember that ns stands for nanoseconds! Exponentiation runs so quickly because not only is Python reducing modulo m along the way, it is performing a surprisingly small number of multiplications. In our loop approach, we c...
pow(3,36,37)  # a = 3, p = 37, p-1 = 36
pow(17,100,101)  # 101 is prime.
pow(303,100,101)  # Why won't we get 1?
pow(5,90,91)  # What's the answer?
pow(7,12318,12319)  # What's the answer?
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
We can learn something from the previous two examples. Namely, 91 and 12319 are not prime numbers. We say that 7 witnesses the non-primality of 12319. Moreover, we learned this fact without actually finding a factor of 12319! Indeed, the factors of 12319 are 97 and 127, which have no relationship to the "witness" 7...
pow(3,90,91)
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Well, ok. Sometimes coincidences happen. We say that 3 is a bad witness for 91, since 91 is not prime, but $3^{90} \equiv 1$ mod $91$. But we could try multiple bases (witnesses). We can expect that someone (some base) will witness the nonprimality. Indeed, for the non-prime 91 there are many good witnesses (ones ...
for witness in range(1,20):
    flt = pow(witness, 90, 91)
    if flt == 1:
        print "%d is a bad witness."%(witness)
    else:
        print "%d raised to the 90th power equals %d, mod 91"%(witness, flt)
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
For some numbers -- the Carmichael numbers -- there are more bad witnesses than good witnesses. For example, take the Carmichael number 41041, which is not prime ($41041 = 7 \cdot 11 \cdot 13 \cdot 41$).
for witness in range(1,20):
    flt = pow(witness, 41040, 41041)
    if flt == 1:
        print "%d is a bad witness."%(witness)
    else:
        print "%d raised to the 41040th power equals %d, mod 41041"%(witness, flt)
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
For Carmichael numbers, it turns out that finding a good witness is just as difficult as finding a factor. Although Carmichael numbers are rare, they demonstrate that Fermat's Little Theorem by itself is not a great way to be certain of primality. Effectively, Fermat's Little Theorem can often be used to quickly prov...
for x in range(41):
    if x*x % 41 == 1:
        print "%d squared is congruent to 1, mod 41."%(x)  # What numbers do you think will be printed?
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Note that we use "natural representatives" when doing modular arithmetic in Python. So the only numbers whose square is 1 mod 41 are 1 and 40. (Note that 40 is the natural representative of -1, mod 41). If we consider the "square roots of 1" with a composite modulus, we find more (as long as the modulus has at least...
for x in range(91):
    if x*x % 91 == 1:
        print "%d squared is congruent to 1, mod 91."%(x)  # What numbers do you think will be printed?
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
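The two loops above can be packaged into a helper (the name `square_roots_of_one` is ours, not the notebook's) that collects every square root of 1 for a given modulus:

```python
def square_roots_of_one(m):
    """All natural representatives x in [0, m) with x*x congruent to 1 mod m."""
    return [x for x in range(m) if (x * x) % m == 1]

square_roots_of_one(41)  # [1, 40] -- a prime modulus admits only the trivial roots
square_roots_of_one(91)  # [1, 27, 64, 90] -- the extra roots betray a composite
```

The extra roots 27 and 64 mod 91 are the "nontrivial square roots of one" that a composite modulus with at least two odd prime factors must have.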
We have described two properties of prime numbers, and therefore two possible indicators that a number is not prime. If $p$ is a number which violates Fermat's Little Theorem, then $p$ is not prime. If $p$ is a number which violates the ROO property, then $p$ is not prime. The Miller-Rabin test will combine these...
def Pingala(e):
    current_number = e
    while current_number > 0:
        if current_number%2 == 0:
            current_number = current_number / 2
            print "Exponent %d BIT 0"%(current_number)
        if current_number%2 == 1:
            current_number = (current_number - 1) / 2
            print "Exponen...
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
The codes "BIT 1" and "BIT 0" tell us what happened at each step, and allow the process to be reversed. In a line with BIT 0, the exponent gets doubled as one goes up one line (e.g., from 11 to 22). In a line with BIT 1, the exponent gets doubled then increased by 1 as one goes up one line (e.g., from 2 to 5). We can...
n = 1  # This is where we start.
n = n*n * 5  # BIT 1 is interpreted as square-then-multiply-by-5, since the exponent is doubled then increased by 1.
n = n*n  # BIT 0 is interpreted as squaring, since the exponent is doubled.
n = n*n * 5  # BIT 1
n = n*n * 5  # BIT 1 again.
n = n*n  # BIT 0
n = n*n * 5  # BIT 1
n = n*n  # BI...
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Note that along the way, we carried out 11 multiplications (count the * symbols), and didn't have to remember too many numbers along the way. So this process was efficient in both time and memory. We just followed the BIT code. The number of multiplications is bounded by twice the number of BITs, since each BIT requ...
bin(90) # Compare this to the sequence of bits, from bottom up.
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Python outputs binary expansions as strings, beginning with '0b'. To summarize, we can compute an exponent like $b^e$ by the following process:

**Pingala's Exponentiation Algorithm**

1. Set the number to 1.
2. Read the bits of $e$, from left to right.
   a. When the bit is zero, square the number.
   b. When the bit is o...
def pow_Pingala(base, exponent):
    result = 1
    bitstring = bin(exponent)[2:]  # Chop off the '0b' part of the binary expansion of exponent
    for bit in bitstring:  # Iterates through the "letters" of the string. Here the letters are '0' or '1'.
        if bit == '0':
            result = result*result
        if b...
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
It is straightforward to modify Pingala's algorithm to compute exponents in modular arithmetic. Just reduce along the way.
def powmod_Pingala(base, exponent, modulus):
    result = 1
    bitstring = bin(exponent)[2:]  # Chop off the '0b' part of the binary expansion of exponent
    for bit in bitstring:  # Iterates through the "letters" of the string. Here the letters are '0' or '1'.
        if bit == '0':
            result = (result*result)...
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Let's compare the performance of our new modular exponentiation algorithm.
%timeit powmod_Pingala(3,999,1000)  # Pingala's algorithm, modding along the way.
%timeit powermod_1(3,999,1000)  # Raise to the power, then mod, using Python built-in exponents.
%timeit powermod_2(3,999,1000)  # Multiply 999 times, modding along the way.
%timeit pow(3,999,1000)  # Use the Python built-in modular ...
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
The fully built-in modular exponentiation pow(b,e,m) command is probably the fastest. But our implementation of Pingala's algorithm isn't bad -- it probably beats the simple (b**e) % m command (in the powermod_1 function), and it's certainly faster than our naive loop in powermod_2. One can quantify the efficiency o...
def powmod_verbose(base, exponent, modulus):
    result = 1
    print "Computing %d raised to %d, modulo %d."%(base, exponent, modulus)
    print "The current number is %d"%(result)
    bitstring = bin(exponent)[2:]  # Chop off the '0b' part of the binary expansion of exponent
    for bit in bitstring:  # Iterates throug...
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
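The multiplication count of the bit-by-bit method is easy to tally: as coded above, every bit costs a squaring, and every 1-bit costs one extra multiplication. A small sketch (the function name `multiplication_count` is ours):

```python
def multiplication_count(exponent):
    """Multiplications used by the bit-by-bit method: one squaring per bit,
    plus one extra multiplication for every 1-bit of the exponent."""
    bits = bin(exponent)[2:]           # e.g. bin(999)[2:] == '1111100111'
    return len(bits) + bits.count('1')

multiplication_count(999)  # 18: ten bits plus eight 1-bits, versus 999 naive multiplications
```

Since the number of bits of $e$ is about $\log_2(e)$, the total work grows logarithmically in the exponent, which is why Pingala's method crushes the naive loop for large exponents.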
The function has displayed every step in Pingala's algorithm. The final result is that $2^{560} \equiv 1$ mod $561$. So in this sense, $2$ is a bad witness. For $561$ is not prime (3 is a factor), but it does not violate Fermat's Little Theorem when $2$ is the base. But within the verbose output above, there is a vi...
def Miller_Rabin(p, base):
    '''
    Tests whether p is prime, using the given base.
    The result False implies that p is definitely not prime.
    The result True implies that p **might** be prime.
    It is not a perfect test!
    '''
    result = 1
    exponent = p-1
    modulus = p
    bitstring = bin(exponent)...
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
How good is the Miller-Rabin test? Will this modest improvement (looking for ROO violations) improve the reliability of witnesses? Let's see how many witnesses observe the nonprimality of 41041.
for witness in range(2,20):
    MR = Miller_Rabin(41041, witness)
    if MR:
        print "%d is a bad witness."%(witness)
    else:
        print "%d detects that 41041 is not prime."%(witness)
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
In fact, one can prove that at least 3/4 of the witnesses will detect the non-primality of any non-prime. Thus, if you keep on asking witnesses at random, your chances of detecting non-primality increase exponentially! In fact, the witness 2 suffices to check whether any number is prime or not up to 2047. In other w...
from mpmath import *  # The mpmath package allows us to compute with arbitrary precision!
# It has specialized functions for log, sin, exp, etc., with arbitrary precision.
# It is probably installed with your version of Python.

def prob_prime(N, witnesses):
    '''
    Conservatively estimates the probability of prima...
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
We implement the Miller-Rabin test for primality in the is_prime function below.
def is_prime(p, witnesses=50):  # witnesses is a parameter with a default value.
    '''
    Tests whether a positive integer p is prime.
    For p < 2^64, the test is deterministic, using known good witnesses.
    Good witnesses come from a table at Wikipedia's article on the Miller-Rabin test, based on research b...
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
How fast is our new is_prime function? Let's give it a try.
%timeit is_prime(234987928347928347928347928734987398792837491)
%timeit is_prime(1000000000000066600000000000001)
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
The results will probably be on the order of a millisecond, perhaps even a tenth of a millisecond ($10^{-4}$ seconds) for non-primes! That's much faster than looking for factors, for numbers of this size. In this way, we can test primality of numbers of hundreds of digits! For an application, let's find some Mersenne pr...
for p in range(1,1000):
    if is_prime(p):  # We only need to check these p.
        M = 2**p - 1  # A candidate for a Mersenne prime.
        if is_prime(M):
            print "2^%d - 1 = %d is a Mersenne prime."%(p,M)
PwNT Notebook 5.ipynb
MartyWeissman/Python-for-number-theory
gpl-3.0
Load the config from the default location
from kubernetes import client, config

config.load_kube_config()
client.configuration.assert_hostname = False
examples/notebooks/create_secret.ipynb
kubernetes-client/python
apache-2.0
Create API endpoint instance and API resource instances
api_instance = client.CoreV1Api()
sec = client.V1Secret()
examples/notebooks/create_secret.ipynb
kubernetes-client/python
apache-2.0
Fill required Secret fields
sec.metadata = client.V1ObjectMeta(name="mysecret")
sec.type = "Opaque"
sec.data = {"username": "bXl1c2VybmFtZQ==", "password": "bXlwYXNzd29yZA=="}
examples/notebooks/create_secret.ipynb
kubernetes-client/python
apache-2.0
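The values in a Secret's data field must be base64-encoded strings. A quick check, using only the standard library, that the strings above decode to the intended credentials, and how to encode new values:

```python
import base64

# Decode the values used in the Secret above:
base64.b64decode("bXl1c2VybmFtZQ==")   # b'myusername'
base64.b64decode("bXlwYXNzd29yZA==")   # b'mypassword'

# Encode a new value for a Secret's data field:
base64.b64encode(b"myusername").decode()   # 'bXl1c2VybmFtZQ=='
```

(Alternatively, the client's `string_data` field accepts plain strings and lets the API server do the encoding.)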
Create Secret
api_instance.create_namespaced_secret(namespace="default", body=sec)
examples/notebooks/create_secret.ipynb
kubernetes-client/python
apache-2.0
Create test Pod API resource instances
pod = client.V1Pod()
spec = client.V1PodSpec()
pod.metadata = client.V1ObjectMeta(name="mypod")

container = client.V1Container()
container.name = "mypod"
container.image = "redis"
examples/notebooks/create_secret.ipynb
kubernetes-client/python
apache-2.0
Add a volumeMount which will be used to hold the secret
volume_mounts = [client.V1VolumeMount()]
volume_mounts[0].mount_path = "/data/redis"
volume_mounts[0].name = "foo"
container.volume_mounts = volume_mounts
examples/notebooks/create_secret.ipynb
kubernetes-client/python
apache-2.0