a = 34
b = 80
# Keep the sum in parentheses; without them, operator precedence changes the result.
average = (a + b) / 2
print(average)
# Write a program to find the average of two numbers.
74.0
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
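The warning in the cell above is about operator precedence: division binds tighter than addition, so the brackets change the result. A minimal standalone illustration (the variable names are ours):

```python
# Division binds tighter than addition, so parentheses matter here.
with_brackets = (34 + 80) / 2     # 114 / 2
without_brackets = 34 + 80 / 2    # 34 + 40.0
print(with_brackets, without_brackets)
```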
p = 34 + 80 / 2
print(p)
# 05: Write a program to calculate the square of a number entered by the user
1058
Apache-2.0
codeMania-python-begginer/02_Varibles-and-datatypes.ipynb
JayramMardi/codeMania
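The squaring exercise isn't shown in full; a minimal sketch with the `input()` call replaced by a function parameter (our simplification, so the logic is easy to test):

```python
# Square a number; in the notebook the value would come from input().
def square(n):
    return n ** 2

print(square(23))
```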
Deep Convolutional GANs. In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images; you can read the [origina...
%matplotlib inline

import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf

!mkdir data
mkdir: cannot create directory ‘data’: File exists
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
Getting the data. Here you can download the SVHN dataset. Run the cell below and it'll download to your machine.
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm

data_dir = 'data/'
if not isdir(data_dir):
    raise Exception("Data directory doesn't exist!")

class DLProgress(tqdm):
    last_block = 0
    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total...
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
These SVHN files are `.mat` files typically used with Matlab. However, we can load them in with `scipy.io.loadmat` which we imported above.
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
    ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
    ax.xaxis.set_visible(False)
    ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0...
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
Here we need to do a bit of preprocessing to get the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're tr...
def scale(x, feature_range=(-1, 1)):
    # scale to (0, 1)
    x = ((x - x.min())/(255 - x.min()))
    # scale to feature_range
    min, max = feature_range
    x = x * (max - min) + min
    return x

class Dataset:
    def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
        split_idx...
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
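The `scale` helper above is easy to check in isolation; this standalone sketch assumes 0-255 pixel inputs, as the literal `255` in the denominator implies:

```python
import numpy as np

def scale(x, feature_range=(-1, 1)):
    # scale to (0, 1), assuming pixel values topping out at 255
    x = (x - x.min()) / (255 - x.min())
    # then stretch to feature_range
    mn, mx = feature_range
    return x * (mx - mn) + mn

img = np.array([0.0, 127.5, 255.0])
print(scale(img))  # endpoints land exactly on -1 and 1
```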
Network Inputs. Here we're just creating some placeholders as usual.
def model_inputs(real_dim, z_dim):
    inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
    inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
    return inputs_real, inputs_z
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
Generator. Here you'll build the generator network. The input will be our noise vector `z` as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32, which is the size of our SVHN images. What's new here is that we'll use convolutional layers to create our new images. The first layer is a ful...
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
    with tf.variable_scope('generator', reuse=reuse):
        # First fully connected layer
        x1 = tf.layers.dense(z, 4*4*512)
        # Reshape it to start the convolutional stack
        x1 = tf.reshape(x1, (-1, 4, 4, 512))
        x1 = tf.lay...
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
Discriminator. Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator is 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll n...
def discriminator(x, reuse=False, alpha=0.2):
    with tf.variable_scope('discriminator', reuse=reuse):
        # Input layer is 32x32x3
        x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
        relu1 = tf.maximum(alpha * x1, x1)
        # 16x16x64
        x2 = tf.layers.conv2d(relu1, 128, 5, ...
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
Model Loss. Calculating the loss like before; nothing new here.
def model_loss(input_real, input_z, output_dim, alpha=0.2):
    """
    Get the loss for the discriminator and generator
    :param input_real: Images from the real dataset
    :param input_z: Z input
    :param out_channel_dim: The number of channels in the output image
    :return: A tuple of (discriminator loss, gen...
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
Optimizers. Not much new here, but notice how the train operations are wrapped in a `with tf.control_dependencies` block so the batch normalization layers can update their population statistics.
def model_opt(d_loss, g_loss, learning_rate, beta1):
    """
    Get optimization operations
    :param d_loss: Discriminator loss Tensor
    :param g_loss: Generator loss Tensor
    :param learning_rate: Learning Rate Placeholder
    :param beta1: The exponential decay rate for the 1st moment in the optimizer
    :ret...
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
Building the model. Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code, since the nodes and operations in the graph are packaged in one object.
class GAN:
    def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
        tf.reset_default_graph()
        self.input_real, self.input_z = model_inputs(real_size, z_size)
        self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z, ...
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
Here is a function for displaying generated images.
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
    fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, sharey=True, sharex=True)
    for ax, img in zip(axes.flatten(), samples[epoch]):
        ax.axis('off')
        img = ((img - img.min())*255 / (img.max() ...
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
And another function we can use to train our network. Notice when we call `generator` to create the samples to display, we set `training` to `False`. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the `net.input_real` placeholder whe...
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
    saver = tf.train.Saver()
    sample_z = np.random.uniform(-1, 1, size=(72, z_size))
    samples, losses = [], []
    steps = 0
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for ...
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
Hyperparameters. GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read [the DCGAN paper](https://arxiv.org/pdf/1511.06434.pdf) to see what worked for th...
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 25
alpha = 0.2
beta1 = 0.5

# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)

dataset = Dataset(trainset, testset)

losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))

fig, ...
_____no_output_____
MIT
dcgan-svhn/DCGAN.ipynb
lucasosouza/udacity-deeplearning-full
Final Exam Problem Statement 1
n = 0
for i in range(0, 10):
    a = int(input("Enter a number: "))
    while a > 5:
        print("The number you entered is greater than 5, please enter another number")
        a = int(input("Enter a number: "))
    n += a
print(n)
Enter a number: 8 The number you entered is greater than 5, please enter another number Enter a number: 1 Enter a number: 2 Enter a number: 3 Enter a number: 4 Enter a number: 5 Enter a number: 0 Enter a number: -1 Enter a number: -6 Enter a number: -2 Enter a number: -1 5
Apache-2.0
Final_Exam.ipynb
JohnLouie16/CPEN-21A-ECE-2-2
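The retry loop above keeps re-prompting until the value is at most 5 and sums ten accepted values. The same logic can be sketched without `input()` by reading from an iterable (the function `sum_valid` and its parameters are our own names, not part of the exam):

```python
def sum_valid(values, limit=5, count=10):
    # Draw numbers from `values`, rejecting any greater than `limit`
    # (like the inner while loop), until `count` valid ones are summed.
    it = iter(values)
    total = 0
    for _ in range(count):
        a = next(it)
        while a > limit:
            a = next(it)   # reject and "re-prompt"
        total += a
    return total

# Same inputs as the transcript: 8 is rejected, the next ten are accepted.
print(sum_valid([8, 1, 2, 3, 4, 5, 0, -1, -6, -2, -1]))
```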
Problem Statement 2
n = 1
sum = 0
print("Enter 5 numbers: ")
while n <= 5:
    number = int(input(""))
    if n == 1 or n == 5:
        sum = sum + number
    n = n + 1
print("The sum of first and last number entered is", sum)
Enter 5 numbers: 5 6 7 8 9 The sum of first and last number entered is 14
Apache-2.0
Final_Exam.ipynb
JohnLouie16/CPEN-21A-ECE-2-2
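Summing the first and last of five entries reduces to list indexing once the numbers are collected; a small sketch with the five `input()` calls replaced by a list (our simplification):

```python
# First and last of a sequence via indexing; nums[-1] is the last element.
def first_plus_last(nums):
    return nums[0] + nums[-1]

print(first_plus_last([5, 6, 7, 8, 9]))  # same inputs as the transcript
```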
Problem Statement 3
x = int(input("Enter Grade: "))
if x >= 90:
    print("Grade = A")
elif x >= 80 and x < 90:
    print("Grade = B")
elif x >= 70 and x < 80:
    print("Grade = C")
elif x >= 60 and x < 70:
    print("Grade = D")
else:
    print("Grade = F")
Enter Grade: 68 Grade = D
Apache-2.0
Final_Exam.ipynb
JohnLouie16/CPEN-21A-ECE-2-2
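Because `elif` only runs when the earlier branches failed, the upper-bound checks above (`x < 90`, `x < 80`, ...) are redundant. A sketch of the same mapping as a function (the name `grade` is ours):

```python
# Each elif already implies the upper bound, so only lower bounds are needed.
def grade(x):
    if x >= 90:
        return 'A'
    elif x >= 80:
        return 'B'
    elif x >= 70:
        return 'C'
    elif x >= 60:
        return 'D'
    return 'F'

print(grade(68))  # D, matching the transcript
```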
Numbers
type(2)
type(2.0)
2 + 2     # addition
2 - 2     # subtraction
2 * 2     # multiplication
2 / 2     # float division
2 // 2    # integer division
2 ** 3    # exponents
2 ** 0.5  # square root
5 % 2     # modulo operator (calculates the remainder)
int(2.0)  # conversion to an integer
int(2.1)  # this rounds down
round(2.11)  # rounds to t...
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
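The difference between `int()` and `round()` above is worth pinning down: `int()` truncates toward zero, while `round()` uses banker's rounding (ties go to the even neighbour). A minimal illustration:

```python
# int() truncates toward zero; round() rounds ties to the nearest even integer.
print(int(2.9))    # 2
print(int(-2.9))   # -2, not -3
print(round(2.5))  # 2, not 3 (banker's rounding)
print(round(3.5))  # 4
```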
Strings
'a string'
"a string"
print("""
multi-
line string
""")
'a' + 'string'  # concatenate strings
'a' * 2         # repeat strings
str(2)          # convert a number to a string
# raw string -- the last backslash must be escaped with another backslash if it is at the end
r'C:\Users\Me\A folder\\'
r'C:\Users\Me\A folder\a file.txt'
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
String indexing
'a string'[0]    # first character of a string
'a string'[-1]   # last character of a string
'a string'[0:4]  # index a string to get first 4 characters
'a string'[:4]   # index a string to get first 4 characters
'a string'[::2]  # get every other letter
'a string'[::-1] # reverse the string
'a string'[:5:2] # every othe...
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
Built-in string methods
'-'.join(['this', 'is', 'a', 'test'])
'this is a test'.split()
'\t\n - remove left'.lstrip()  # remove whitespace on the left
'\t\n - remove left'.rstrip()  # remove whitespace on the right
'testtest - remove left'.lstrip('test')  # remove all instances of 'test' from the left of the string
'testtest - remove left'....
- tabs and newlines
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
Variables
books = 1
books  # print out our variable
books = books + 1
books
books += 1
books
books -= 1
books
books *= 2
books
books /= 2
books
books **= 2
books
books %= 2
books
# concatenate two string variables
a = 'string 1'
b = 'another string'
a + b
# check variable type
type(a)
# don't do this!
# type = 'test'
# type(a) ...
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
Lists, Tuples, Sets, and Dictionaries
# a basic list
[1, 2, 3]
# lists can contain different data types
[1, 'a', 3]
# lists can contain other lists
[1, [1, 2, 3], 3]
# join lists
[1, 2, 3] + [4, 5]
# repeat a list
[1, 2, 3] * 2
# get the length of a list
len([1, 2, 3])
# make a blank list and add the element '1' to it
a_list = []
a_list.append(1)
a_list
# ...
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
Tuples
a_tuple = (2, 3)
a_tuple
tuple(a_list)
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
Sets
set(a_list)
a_set = {1, 2, 3, 3}
a_set
set_1 = {1, 2, 3}
set_2 = {2, 3, 4}
set_1.union(set_2)
set_1 | set_2
set_1.difference(set_2)
# shorthand for the difference operator
set_1 - set_2
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
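Union and difference have operator shorthands above; intersection and symmetric difference complete the set, as this small sketch shows:

```python
set_1 = {1, 2, 3}
set_2 = {2, 3, 4}
print(set_1 & set_2)  # intersection: elements in both sets
print(set_1 ^ set_2)  # symmetric difference: elements in exactly one set
```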
Dictionaries
a_dict = {'books': 1, 'magazines': 2, 'articles': 7}
a_dict
a_dict['books']
another_dict = {'movies': 4}
a_dict | another_dict
a_dict['shows'] = 12
a_dict
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
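The `|` merge used above requires Python 3.9+; when both dictionaries share a key, the right-hand operand wins, which is worth knowing before relying on it. A small sketch with our own sample data:

```python
# dict merge with | (Python 3.9+): on a key clash, the right operand wins.
a_dict = {'books': 1, 'magazines': 2}
another = {'magazines': 5, 'movies': 4}
merged = a_dict | another
print(merged)
```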
Loops and Comprehensions
a_list = [1, 2, 3]
for element in a_list:
    print(element)

a_list = [1, 2, 3]
for index in range(len(a_list)):
    print(index)
0 1 2
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
This brings up the documentation for a function.
?range

a_list = [1, 2, 3]
for index, element in enumerate(a_list):
    print(index, element)

a_list = []
for i in range(3):
    a_list.append(i)
a_list

# a list comprehension
a_list = [i for i in range(3)]
a_list

a_dict = {'books': 1, 'magazines': 2, 'articles': 7}
for key, value in a_dict.items():
    print(f'{key}:{...
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
Booleans and Conditionals
books_read = 11
books_read > 10
none_var = None
none_var is None
books_read = 12
if books_read < 10:
    print("You have only read a few books.")
elif books_read >= 12:
    print("You've read lots of books!")
else:
    print("You've read 10 or 11 books.")
a = 'test'
type(a) is str
type(a) is not str
'st' in 'a string' ...
is false
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
Libraries and Imports
import time
time.time()

import time as t
t.time()

import urllib.request
urllib.request.urlopen('https://www.pypi.org')

from urllib.request import urlopen
urlopen('https://www.pypi.org')

# importing a function from a subpackage of a library, and aliasing it
from urllib.request import urlopen as uo
uo('https://www.pypi.o...
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
Functions
def test_function(doPrint, printAdd='more'):
    """
    A demo function.
    """
    if doPrint:
        print('test' + printAdd)
    return printAdd

value = test_function(True)
print(value)

# brings up documentation for sorted()
?sorted

a_list = [2, 4, 1]
sorted(a_list, reverse=True)

def test_function():
    """
    ...
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
Classes
class testObject:
    def __init__(self, attr):
        self.test_attribute = attr

    def test_function(self):
        print('testing123')
        print(f'testing{self.test_attribute}')

to = testObject(123)
to.test_attribute
to.test_function()
testing123 testing123
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
Here is another module from core Python.
import calendar

# creates a new instance of a Calendar object
c = calendar.Calendar()
type(c)
# an attribute
c.firstweekday
# a method/function
list(c.iterweekdays())
_____no_output_____
MIT
2-Chapter-2/Python Basics.ipynb
DiegoMerino28/Practical-Data-Science-with-Python
Using zip()
# To practice zip(), we extract the dataframe's column names and one of its rows.
colnames = list(df)
row_df = list(df.loc[1])

# zip() pairs up two lists element by element.
# Create the zipped_list object with the zip() function
zipped_list = zip(colnames, row_df)

# Pri...
_____no_output_____
MIT
Preprocessing/wordIndicators.ipynb
samp891216/Portafolio-SERGIO-MARIN
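The column-name/row pairing above can be turned straight into a dictionary with `dict(zip(...))`; illustrative data stand in here for the notebook's `df`:

```python
# dict(zip(...)) pairs keys with values in one step (sample data, not the notebook's).
colnames = ['name', 'age', 'city']
row = ['Ana', 34, 'Bogota']
record = dict(zip(colnames, row))
print(record)
```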
Kickoff - CHALLENGE - RAIS. **E**xploratory **D**ata **A**nalysis on the RAIS Database - Florianópolis, SC - Brasil. **Authors:** Luis Felipe Pelison, Fernando Battisti, Ígor Yamamoto. Objective: how do socioeconomic characteristics impact how much you earn? Imports: here is where you declare the external dependencies required fo...
import pandas as pd
import numpy as np
# here you can import your libraries

pd.set_option('max_rows', 200)
_____no_output_____
MIT
RAIS/kickoff.ipynb
dsfloripa/challenges
Open Data. Here is where your data is loaded from different file formats (e.g. .csv, .json, .parquet, .xlsx) into pandas data frames.
df = pd.read_parquet('data/rais_floripa_2018.parquet')
print(df.shape)
df.head(2)
(432486, 16)
MIT
RAIS/kickoff.ipynb
dsfloripa/challenges
Pre-Processing. The real world is a mess, so we need to do some manipulation to clean the data. Converting to the right types: to make the data easier (or possible) to operate on, we need to assign the most appropriate type to each column.
CAT_FEATURES = ['CNAE 2.0 Subclasse', 'Escolaridade após 2005', 'Mês Admissão', 'Mês Desligamento',
                'Motivo Desligamento', 'Município', 'Raça Cor', 'Sexo Trabalhador',
                'Tamanho Estabelecimento', 'Tipo Defic', 'UF', 'CBO 2002']

for cat_feat in CAT_FEATURES:
    df[cat_feat] = df[cat_feat].astype('str')

df['Tempo Empre...
_____no_output_____
MIT
RAIS/kickoff.ipynb
dsfloripa/challenges
Mapping categories. Sometimes the raw categories are not very understandable, so we map them to more readable ones.
df['Tamanho Estabelecimento'].value_counts()
df['Tamanho Estabelecimento'] = (
    df['Tamanho Estabelecimento']
    .map({
        '1': 'ZERO',
        '2': 'ATE_4',
        '3': 'DE_5_A_9',
        '4': 'DE_10_A_19',
        '5': 'DE_20_A_49',
        '6': 'DE_50_A_99',
        '7...
_____no_output_____
MIT
RAIS/kickoff.ipynb
dsfloripa/challenges
Removing wrong categories. Sometimes there are wrong or meaningless categories; in those cases we need to treat them.
df['CBO 2002'] = df['CBO 2002'].apply(lambda x: 'Unknown' if x == '0000-1' else x)
_____no_output_____
MIT
RAIS/kickoff.ipynb
dsfloripa/challenges
Analysis. Now we can do our exploratory analysis. Be creative!
# All from Florianopolis
df['Município'].value_counts()
_____no_output_____
MIT
RAIS/kickoff.ipynb
dsfloripa/challenges
Challenge 0. List the most popular occupations (CBO)
# Code here
_____no_output_____
MIT
RAIS/kickoff.ipynb
dsfloripa/challenges
Main Challenge. Here we will develop the answer to the main challenge described at the beginning.
# Code here
_____no_output_____
MIT
RAIS/kickoff.ipynb
dsfloripa/challenges
A Simple Example. The first step is to prepare your data. Here we use the [IMDB dataset](https://keras.io/datasets/imdb-movie-reviews-sentiment-classification) as an example.
import numpy as np
from tensorflow.keras.datasets import imdb

# Load the integer sequences of the IMDB dataset with Keras.
index_offset = 3  # word index offset
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000,
                                                      index_from=index_offset)
y_train = y_t...
_____no_output_____
Apache-2.0
docs/ipynb/text_classification.ipynb
ivynasantino/autokeras
The second step is to run the [TextClassifier](/text_classifier).
import autokeras as ak

# Initialize the text classifier.
clf = ak.TextClassifier(
    overwrite=True,
    max_trials=1)  # It only tries 1 model.
# Feed the text classifier with training data.
clf.fit(x_train, y_train, epochs=2)
# Predict with the best model.
predicted_y = clf.predict(x_test)
# Evaluate the best...
_____no_output_____
Apache-2.0
docs/ipynb/text_classification.ipynb
ivynasantino/autokeras
Validation Data. By default, AutoKeras uses the last 20% of training data as validation data. As shown in the example below, you can use `validation_split` to specify the percentage.
clf.fit(x_train,
        y_train,
        # Split the training data and use the last 15% as validation data.
        validation_split=0.15)
_____no_output_____
Apache-2.0
docs/ipynb/text_classification.ipynb
ivynasantino/autokeras
You can also use your own validation set instead of splitting it from the training data, with `validation_data`.
split = 5000
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
clf.fit(x_train,
        y_train,
        epochs=2,
        # Use your own validation set.
        validation_data=(x_val, y_val))
_____no_output_____
Apache-2.0
docs/ipynb/text_classification.ipynb
ivynasantino/autokeras
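The manual slicing above mirrors what `validation_split` does: hold out a fixed fraction at the end. A tiny sketch with made-up list data (no AutoKeras involved) to make the correspondence concrete:

```python
# validation_split=0.15 keeps the first 85% for training and
# holds out the last 15% -- equivalent to slicing by hand.
x_train = list(range(100))
split = round(len(x_train) * 0.85)
train_part, val_part = x_train[:split], x_train[split:]
print(len(train_part), len(val_part))
```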
Customized Search Space. For advanced users, you may customize your search space by using [AutoModel](/auto_model/automodel-class) instead of [TextClassifier](/text_classifier). You can configure the [TextBlock](/block/textblock-class) for some high-level configurations, e.g., `vectorizer` for the type of text vectorization...
import autokeras as ak

input_node = ak.TextInput()
output_node = ak.TextBlock(vectorizer='ngram')(input_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
    inputs=input_node,
    outputs=output_node,
    overwrite=True,
    max_trials=1)
clf.fit(x_train, y_train, epochs=2)
_____no_output_____
Apache-2.0
docs/ipynb/text_classification.ipynb
ivynasantino/autokeras
The usage of [AutoModel](/auto_model/automodel-class) is similar to the [functional API](https://www.tensorflow.org/guide/keras/functional) of Keras. Basically, you are building a graph whose edges are blocks and whose nodes are intermediate outputs of blocks. To add an edge from `input_node` to `output_node` with `output_n...
import autokeras as ak

input_node = ak.TextInput()
output_node = ak.TextToIntSequence()(input_node)
output_node = ak.Embedding()(output_node)
# Use separable Conv layers in Keras.
output_node = ak.ConvBlock(separable=True)(output_node)
output_node = ak.ClassificationHead()(output_node)
clf = ak.AutoModel(
    inputs=i...
_____no_output_____
Apache-2.0
docs/ipynb/text_classification.ipynb
ivynasantino/autokeras
Data Format. The AutoKeras TextClassifier is quite flexible about the data format. For the text, the input data should be one-dimensional. For the classification labels, AutoKeras accepts both plain labels, i.e. strings or integers, and one-hot encoded labels, i.e. vectors of 0s and 1s. We also support using [tf.data....
import tensorflow as tf

train_set = tf.data.Dataset.from_tensor_slices(((x_train, ), (y_train, ))).batch(32)
test_set = tf.data.Dataset.from_tensor_slices(((x_test, ), (y_test, ))).batch(32)

clf = ak.TextClassifier(
    overwrite=True,
    max_trials=3)
# Feed the tensorflow Dataset to the classifier.
clf.fit(train_se...
_____no_output_____
Apache-2.0
docs/ipynb/text_classification.ipynb
ivynasantino/autokeras
Data Preparation
# Loading the dataset
datasets_path = os.path.join(absFilePath, 'Datasets\\')
url = datasets_path + 'data_bike_hour.csv'
df = pd.read_csv(url)
df = df.drop(['instant','dteday','casual','registered'], axis=1)

# Handling some data
df = df.drop(df[df.weathersit == 4].index)
df[df["weathersit"] == 4]

# Decode Categor...
_____no_output_____
MIT
tabular data/regression/Benchmarks/1. bike/supplementary tests/bike_disc.ipynb
RemilYoucef/SPLITSD4X
Neighbors Generation
nb_neighbors = 20
list_neigh = generate_all_neighbors(data_test, numerical_cols, categorical_cols, nb_neighbors)

# store all the neighbors together
n = np.size(data_test, 0)
all_neighbors = list_neigh[0]
for i in range(1, n):
    all_neighbors = np.concatenate((all_neighbors, list_neigh[i]), axis=0)

# One hot enco...
_____no_output_____
MIT
tabular data/regression/Benchmarks/1. bike/supplementary tests/bike_disc.ipynb
RemilYoucef/SPLITSD4X
One hot encoding for the training and the test sets
data_train_df['weekday'] = data_train_df['weekday'].replace(weekday_mapper)
data_train_df['holiday'] = data_train_df['holiday'].replace(holiday_mapper)
data_train_df['workingday'] = data_train_df['workingday'].replace(workingday_mapper)
data_train_df['season'] = data_train_df['season'].replace(season_mapper)
data_train...
_____no_output_____
MIT
tabular data/regression/Benchmarks/1. bike/supplementary tests/bike_disc.ipynb
RemilYoucef/SPLITSD4X
Training the MLP model
# Sklearn MLP regressor
mlp = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(50, 50),
                                 tol=1e-2, max_iter=1000, random_state=0))
model_nt = mlp.fit(data_train, target_train)
targe...
_____no_output_____
MIT
tabular data/regression/Benchmarks/1. bike/supplementary tests/bike_disc.ipynb
RemilYoucef/SPLITSD4X
Execution of the Split-Based Selection Form Algorithm. Discretization: Equal Frequency
split_point = len(numerical_cols)
nb_models = 100

L_Subgroups_freq = []
L_Subgroups_freq.append(SplitBasedSelectionForm_freq(data_test, target_test, nb_models, model_nt, list_neigh, split_point, 4)[0])
L_Subgroups_freq.append(SplitBasedSelectionForm_freq(data_test, target_test, nb_models, model_nt, list_neigh, split_po...
_____no_output_____
MIT
tabular data/regression/Benchmarks/1. bike/supplementary tests/bike_disc.ipynb
RemilYoucef/SPLITSD4X
Discretization : Equal Width
L_Subgroups_width = []
L_Subgroups_width.append(SplitBasedSelectionForm_width(data_test, target_test, nb_models, model_nt, list_neigh, split_point, 4)[0])
L_Subgroups_width.append(SplitBasedSelectionForm_width(data_test, target_test, nb_models, model_nt, list_neigh, split_point, 5)[0])
L_Subgroups_width.append(SplitBase...
_____no_output_____
MIT
tabular data/regression/Benchmarks/1. bike/supplementary tests/bike_disc.ipynb
RemilYoucef/SPLITSD4X
deeptax Description: Deep learning taxonomic classification
import socket
print(socket.gethostname())

import sys
sys.path.append('../deeptax')

import matplotlib.pyplot as plt
%matplotlib inline
_____no_output_____
MIT
notebooks/1.0-deeptax_start.ipynb
charlos1204/deeptax
_____no_output_____
MIT
Untitled2.ipynb
atapin/Caricature-Your-Face
Histogram equalization
import numpy as np
import tifffile as tif
import cv2
import matplotlib.pyplot as plt
%matplotlib inline

def read_tif(fname):
    t = tif.imread(fname)
    img = np.zeros(t.shape)
    img[:,:] = tif.imread(fname)
    return img

def normalize(tile):
    vmin = tile.min(); vmax = tile.max()
    new_tile...
_____no_output_____
MIT
notebooks/.ipynb_checkpoints/historam_equalization-checkpoint.ipynb
jabae/CMontage
Maximizing non-Gaussianity. Group ALT: Andreea, Laura, Tien. Exercise H6.1: Kurtosis of Toy Data
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import scipy.io as sio
import seaborn as sns

########### TIEN
# https://gist.github.com/dkapitan/fcf45a97caaf48bc3d6be17b5f8b213c
class SeabornFig2Grid():
    def __init__(self, seaborngrid, fig, subplot_spec):
        self.fig ...
_____no_output_____
MIT
Sheet06_Laura.ipynb
lauraflyra/MI_2git
__(a) Apply the mixing matrix $\mathbf{A}$ to the original sources $\mathbf{s}$.__
A = np.array([[4, 3], [2, 1]])
x_normal = A @ s_normal
x_laplac = A @ s_laplac
x_unifor = A @ s_unifor
_____no_output_____
MIT
Sheet06_Laura.ipynb
lauraflyra/MI_2git
__(b) Center the mixtures $\mathbf{x}$ to zero mean.__
x_normal_cent = x_normal - np.mean(x_normal, axis=1).reshape(2, 1)
x_laplac_cent = x_laplac - np.mean(x_laplac, axis=1).reshape(2, 1)
x_unifor_cent = x_unifor - np.mean(x_unifor, axis=1).reshape(2, 1)
_____no_output_____
MIT
Sheet06_Laura.ipynb
lauraflyra/MI_2git
__(c) Decorrelate the mixtures from (b) by applying principal component analysis (PCA) on themand project them onto the PCs.__
_, eigvals_normal, pca_normal = principal_components(x_normal_cent)
_, eigvals_laplac, pca_laplac = principal_components(x_laplac_cent)
_, eigvals_unifor, pca_unifor = principal_components(x_unifor_cent)

x_normal_decorr = pca_normal.T @ x_normal_cent
x_laplac_decorr = pca_laplac.T @ x_laplac_cent
x_unifor_decorr = pca_...
_____no_output_____
MIT
Sheet06_Laura.ipynb
lauraflyra/MI_2git
__(e) Rotate the whitened mixtures by different angles $\theta$ and calculate the (excess) kurtosis empirically for each dimension in $\mathbf{x}$.__
def rotate(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def kurtosis(x, theta):
    R = rotate(theta)
    x_theta = R @ x
    kurt = np.mean(x_theta**4, axis=1) - 3
    return kurt

thetas = np.arange(0, 2*np.pi + np.pi/50, np.pi/50)
kurt_normal = np.zeros((len(thetas), 2)...
_____no_output_____
MIT
Sheet06_Laura.ipynb
lauraflyra/MI_2git
__(f) Find the minimum and maximum kurtosis value for the first dimension and rotate the data accordingly.__
theta_normal_min, theta_normal_max = thetas[np.argmin(kurt_normal.T[0])], thetas[np.argmax(kurt_normal.T[0])]
theta_laplac_min, theta_laplac_max = thetas[np.argmin(kurt_laplac.T[0])], thetas[np.argmax(kurt_laplac.T[0])]
theta_unifor_min, theta_unifor_max = thetas[np.argmin(kurt_unifor....
Minimum kurtosis value of the normal distribution: -0.07290092529895587 Maximum kurtosis value of the normal distribution: 0.0013696690943216794 Minimum kurtosis value of the Laplace distribution: 1.584103194281238 Maximum kurtosis value of the Laplace distribution: 3.0186505751018835 Minimum kurtosis value of the un...
MIT
Sheet06_Laura.ipynb
lauraflyra/MI_2git
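As a quick standalone check on the rotation used in (e): `R(θ)` is orthogonal, so composing it with `R(-θ)` recovers the identity. A minimal sketch of that property:

```python
import numpy as np

# Same 2-D rotation matrix as in the exercise.
def rotate(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Rotating by theta and then by -theta should give the identity.
I = rotate(0.7) @ rotate(-0.7)
print(np.round(I, 10))
```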
Python tutorial- [Basics](Basics): Math, Variables, Functions, Control flow, Modules- [Data representation](Data-representation): String, Tuple, List, Set, Dictionary, Objects and Classes- [Standard library modules](Standard-library-modules): script arguments, file operations, timing, processes, forks, multiprocessing...
# This is a line comment.
"""
A multi-line comment.
"""
a = None  # Just declared an empty object
print(a)
a = 1
print(a)
a = 'abc'
print(a)
b = 3
c = [1, 2, 3]
a = [a, 2, b, 1., 1.2e-5, True]  # This is a list.
print(a)

## Python is a dynamic language
a = 1
print(type(a))
print(a)
a = "spam"
print(type(a))
print(a)
a = 1 ...
abc
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Now let us swap the values of two variables.
print(a, b, c)
t = c
c = b
b = t
print(a, b, c)
['abc', 2, 3, 1.0, 1.2e-05, True] 3 [1, 2, 3] ['abc', 2, 3, 1.0, 1.2e-05, True] [1, 2, 3] 3
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
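The three-assignment swap above works, but Python's tuple unpacking does it without the temporary variable (a small idiomatic sketch with our own starting values):

```python
# Tuple unpacking evaluates the right side first, then binds both names.
b, c = 3, [1, 2, 3]
b, c = c, b
print(b, c)
```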
Math operations Arithmetic
a = 2
b = 1
b = a*(5 + b) + 1/0.5
print(b)
d = 1/a
print(d)
14.0 0.5
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Logical operations:
a = True
b = 3
print(b == 5)
print(a == False)
print(b < 6 and not a)
print(b < 6 or not a)
print(b < 6 and (not a or not b == 3))
print(False and True)
True == 1
_____no_output_____
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Functions. Functions are a great way to separate code into readable chunks; the exact size and number of functions needed to solve a problem will affect readability. New concepts: indentation, namespaces, global and local scope, default parameters; passing arguments by value or by reference is meaningless in Python, what...
## Indentation and function declaration, parameters of a function
def operation(a, b):
    c = 2*(5 + b) + 1/0.5
    a = 1
    return a, c

a = None
mu = 2
operation(mu, 1)
a, op = operation(a, 1)
print(a, op)

# Function scope, program workflow
def f(a):
    print("inside the scope of f():")
    a = 4
    print("a =", ...
0 1 f2: 1
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Task:
- Define three functions, f, g and h. Call g and h from inside f. Run f on some value v.
- You can also have functions that are defined inside the namespace of another function. Try it!

Data types. Everything is an object in Python, and every object has an ID (or identity), a type, and a value. This means that whene...
from IPython.display import Image
Image(url="../img/mutability.png", width=400, height=400)

i = 43
print(id(i))
print(type(i))
print(i)
i = 42
print(id(i))
print(type(i))
print(i)
i = 43
print(id(i))
print(type(i))
print(i)
i = i + 1
print(id(i))
print(type(i))
print(i)

# assignments reference the same object as i
i...
[1, 2, 3] 1799133954760 <class 'list'> [1, 2] 1799133954760
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
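The id experiments above boil down to one distinction: mutating an object in place keeps its identity, while rebinding a name to a new value does not affect other names. A condensed sketch:

```python
# In-place mutation: the list keeps the same id.
xs = [1, 2, 3]
before = id(xs)
xs.append(4)
assert id(xs) == before

# Rebinding: n now names a different int object; m is untouched.
n = 1
m = n
n += 1
print(n, m)
```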
Question:
- Why weren't all data types made mutable only, or immutable only?

Below, if ints were mutable, you would expect both variables to be updated. But you normally want variables pointing to ints to be independent.
a = 5
b = a
a += 5
print(a, b)

## A list however is a mutable datatype in Python
x = [1, 2, 3]
y = x
print(x, y)  # [1, 2, 3]
y += [3, 2, 1]
print(x, y)  # [1, 2, 3, 3, 2, 1]

## String mutable? No
def func(val):
    val += 'bar'
    return val

x = 'foo'
print(x)  # foo
print(func(x))
print(x)  # foo

## List mutable? Yes.
de...
[1, 2, 3] [1, 2, 3, 3, 2, 1] [1, 2, 3, 3, 2, 1]
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
**Control flow**

There are two major types of programming languages, procedural and functional. Python is mostly procedural, with very simple functional elements. Procedural languages typically have very strong control flow specifications. Programmers spend time specifying how a program should run. In functional language...
# for loops
for b in [1, 2, 3]:
    print(b)

# while, break and continue
b = 0
while b < 10:
    b += 1
    a = 2
    if b%a == 0:
        #break
        continue
    print(b)

# Now do the same, but using the for loop

## if else: use different logical operators and see if it makes sense
a = 1
if a == 3:
    print('3')...
division by zero! executing finally code block..
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Python modules

```
import xls
"How can you simply import Excel !?!"
```

- How Python is structured: packages are the way code libraries are distributed. Libraries contain one or several modules. Each module can contain object classes, functions and submodules.
- Object introspection: it happens often that some Python code tha...
import math
print(dir())
print(dir(math))
print(help(math.log))
a = 3
print(type(a))
import numpy
print(numpy.__version__)
import os
print(os.getcwd())
['In', 'Out', '_', '_1', '_7', '__', '___', '__builtin__', '__builtins__', '__doc__', '__loader__', '__name__', '__package__', '__spec__', '_dh', '_i', '_i1', '_i10', '_i11', '_i12', '_i13', '_i2', '_i3', '_i4', '_i5', '_i6', '_i7', '_i8', '_i9', '_ih', '_ii', '_iii', '_oh', '_sh', 'a', 'b', 'c', 'd', 'exit', 'func', '...
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
**Task:**
- Compute the distance between 2D points: `d(p1, p2) = sqrt((x1-x2)**2 + (y1-y2)**2)`, where `pi(xi, yi)`.
- Define a module containing a function that computes the euclidian distance. Use the Spyder code editor and save the module on your filesystem.
- Import that module into a new code cell below.
- Make the module l...
""" %run full(relative)path/distance.py or os.setcwd(path) """ import distance print(distance.euclidian(1, 2, 4.5 , 6)) from distance import euclidian print(euclidian(1, 2, 4.5 , 6)) import distance as d print(d.euclidian(1, 2, 4.5 , 6)) import sys print(sys.path) sys.path.append('/my/custom/path') print(sys.path)
['', '/home/sergiu/programs/miniconda3/envs/lts/lib/python36.zip', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6/lib-dynload', '/home/sergiu/programs/miniconda3/envs/lts/lib/python3.6/site-packages', '/home/sergiu/programs/miniconda3/envs/lts/lib/pyt...
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Data representation

Strings
# String declarations
statement = "Gene IDs are great. My favorite gene ID is"
name = "At5G001024"
statement = statement + " " + name
print(statement)
statement2 = 'Genes names \n \'are great. My favorite gene name is ' + 'Afldtjahd'
statement3 = """
Gene IDs are great. My favorite genes are {} and {}.""".format(name, ...
Gene IDs are great. My favorite gene ID is At5G001024 Gene blabla At5G0
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Tuples

A few pros for tuples:
- Tuples are faster than lists
- Tuples can be keys to dictionaries (they are immutable types)
# a tuple is an immutable list
a = (1, "spam", 5)
# a.append("eggs")  # raises AttributeError: tuples are immutable
print(a[1])
b = (1, "one")
c = (a, b, 3)
print(c)

# unpacking a collection into positional arguments
def sum(a, b):  # note: this shadows the built-in sum()
    return a + b

values = (5, 2)
s = sum(*values)
print(s)
spam ((1, 'spam', 5), (1, 'one'), 3) 7
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
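The second pro above — tuples as dictionary keys — can be shown with a small lookup table (the coordinate-to-gene mapping is a made-up example):

```python
# tuples are hashable, so they can key a dictionary
locations = {(1, 4): 'geneid1', (2, 7): 'geneid2'}
print(locations[(1, 4)])  # geneid1
# a list key would raise TypeError: unhashable type: 'list'
```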
Lists
a = [1,"one",(2,"two")] print(a[0]) print(a) a.append(3) print(a) b = a + a[:2] print(b) ## slicing and indexing print(b[2:5]) del a[-1] print(a) print(a.index("one")) print(len(a)) ## not just list size but list elements too are scoping free! (list is mutable) def f(a, b): a[1] = "changed" b = [1,2] retur...
['e', 'b', ['ab', 'ba']] ['a', 'b', ['ab', 'd']] ['c', 'b', ['ab', 'd']] ['a', 'b', ['ab', 'ba']]
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Sets

Sets have no order and cannot include identical elements. Use them when the position of elements is not relevant. Finding elements is faster than in a list. Also set operations are more straightforward. A frozen set has a hash value.

Task:
- Find on the Internet the official reference documentation for the Python s...
# set vs. frozenset
s = set()
#s = frozenset()
s.add(1)
s = s | set([2, "three"])
s |= set([2, "three"])
s.add(2)
s.remove(1)
print(s)
print("three" in s)

s1 = set(range(10))
s2 = set(range(5, 15))
s3 = s1 & s2
print(s1, s2, s3)
s3 = s1 - s2
print(s1, s2, s3)
print(s3 <= s1)
s3 = s1 ^ s2
print(s1, s2, s3)
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9} {5, 6, 7, 8, 9, 10, 11, 12, 13, 14} {8, 9, 5, 6, 7} {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} {5, 6, 7, 8, 9, 10, 11, 12, 13, 14} {0, 1, 2, 3, 4} True {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} {5, 6, 7, 8, 9, 10, 11, 12, 13, 14} {0, 1, 2, 3, 4, 10, 11, 12, 13, 14}
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
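Since a frozen set has a hash value, it can also serve as a dictionary key; a minimal sketch with made-up gene groups:

```python
# frozensets are hashable and compare by content, not order
groups = {frozenset(['geneid1', 'geneid2']): 'pathway A'}
print(groups[frozenset(['geneid2', 'geneid1'])])  # pathway A
```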
Dictionary

- Considered one of the most elegant data structures in Python.
- A set of key: value pairs.
- Keys must be hashable elements; values can be any Python datatype.
- The keys of the dictionary are hashable, i.e. they are processed by a hashing function which generates a unique result for each unique value supplied to the...
d = {'geneid9': 100, 'geneid8': 90, 'geneid7': 80, 'geneid6': 70, 'geneid5': 60, 'geneid4': 50}
d
d = {}
d['geneid10'] = 110
d

# Creation: dict(list)
genes = ['geneid1', 'geneid2', 'geneid3']
values = [20, 30, 40]
d = dict(zip(genes, values))
print(d)

# Creation: dictionary comprehensions
d2 = { 'geneid'+str(i):10*(i+1) ...
{'geneid4': 50, 'geneid5': 60, 'geneid6': 70, 'geneid7': 80, 'geneid8': 90, 'geneid9': 100} dict_keys(['geneid4', 'geneid5', 'geneid6', 'geneid7', 'geneid8', 'geneid9']) dict_values([50, 60, 70, 80, 90, 100]) geneid4 50 geneid5 60 geneid6 70 geneid7 80 geneid8 90 geneid9 100
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Task:

Find the dictionary key corresponding to a certain value. Why does Python not offer a native method for this?
d = {'geneid9': 100, 'geneid8': 90, 'geneid7': 90, 'geneid6': 70, 'geneid5': 60, 'geneid4': 50}

def getkey(value):
    ks = set()
    # .. your code here
    return ks

print(getkey(90))
set()
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
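One possible implementation for the `getkey` task above (a sketch): because dictionary values are not unique, the reverse lookup must return a set of keys — which is also why Python offers no native method for it.

```python
d = {'geneid9': 100, 'geneid8': 90, 'geneid7': 90,
     'geneid6': 70, 'geneid5': 60, 'geneid4': 50}

def getkey(value):
    # collect every key whose value matches (one-to-many mapping)
    ks = set()
    for k, v in d.items():
        if v == value:
            ks.add(k)
    return ks

print(sorted(getkey(90)))  # ['geneid7', 'geneid8']
```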
Objects and Classes

Everything is an object in Python and every variable is a reference to an object. References map the address in memory where an object lies. However, this is kept hidden in Python. C was famous for not cleaning up automatically the address space after allocating memory for its data structures. This was ...
class Dog(object):
    def __init__(self, name):
        self.name = name
        return

    def bark_if_called(self, call):
        if call[:-1] == self.name:
            print("Woof Woof!")
        else:
            print("*sniffs..")
        return

    def get_ball(self):
        print(self.name + " bri...
*sniffs.. Woof Woof! *drools Georgie brings back ball *hates you Georgie
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Decorators
from time import sleep

def sleep_decorator(function):
    """
    Limits how fast the function is called.
    """
    def wrapper(*args, **kwargs):
        sleep(2)
        return function(*args, **kwargs)
    return wrapper

@sleep_decorator
def print_number(num):
    return num

print(print_number(222))

for...
222 1 2 3 4 5
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
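A common refinement of the decorator pattern above (a sketch, with a logging wrapper instead of `sleep`): `functools.wraps` preserves the wrapped function's name and docstring, which a plain wrapper loses.

```python
from functools import wraps

def logged(function):
    @wraps(function)  # copies __name__, __doc__, etc. onto the wrapper
    def wrapper(*args, **kwargs):
        print('calling', function.__name__)
        return function(*args, **kwargs)
    return wrapper

@logged
def add(a, b):
    return a + b

print(add(1, 2))      # prints "calling add", then 3
print(add.__name__)   # add, not "wrapper"
```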
Standard library modules

https://docs.python.org/3/library/

- sys - system-specific parameters and functions
- os - operating system interface
- shutil - shell utilities
- math - mathematical functions and constants
- random - pseudorandom number generator
- timeit - time it
- format - number and text formatting
- zlib - file ...
import sys
print(sys.argv)
sys.exit()

## getopt, sys.exit()
## getopt.getopt(args, options[, long_options])
# import getopt
# try:
#     opts, args = getopt.getopt(sys.argv[1:], "hi:o:", ["ifile=", "ofile="])
# except getopt.GetoptError:
#     print('test.py -i <inputfile> -o <outputfile>')
#     sys.exit(2)
# for opt, arg ...
['/home/sergiun/programs/anaconda3/envs/py35/lib/python3.5/site-packages/ipykernel/__main__.py', '-f', '/run/user/1000/jupyter/kernel-13c582f7-e031-4ca3-8c2e-ec3cc87d2d2c.json']
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Task:
- Create a second script that contains command line arguments and imports the distance module above. If an -n 8 is provided in the arguments, it must generate 8 random points and compute a matrix of all pair distances.

os module: File operations

The working directory, file IO, copy, rename and delete.
import os
print(os.getcwd())
#os.chdir(newpath)
os.system('mkdir testdir')

f = open('testfile.txt', 'wt')
f.write('One line of text\n')
f.write('Another line of text\n')
f.close()

import shutil
#shutil.copy('testfile.txt', 'testdir/')
shutil.copyfile('testfile.txt', 'testdir/testfile1.txt')
shutil.copyfile('testfile.t...
/home/sergiun/projects/work/course One line of text Another line of text testfile2.txt testfile1.txt ['testdir/file1.txt', 'testdir/file2.txt']
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
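A sketch of the command-line task above, with the `euclidian` function from the distance module inlined; parsing the `-n` argument via `getopt`/`argparse` is left out to keep it short.

```python
import random

def euclidian(x1, y1, x2, y2):
    return ((x1 - x2)**2 + (y1 - y2)**2) ** 0.5

n = 8  # would come from the -n command-line argument
points = [(random.random(), random.random()) for _ in range(n)]
# all-pairs distance matrix
matrix = [[euclidian(*p, *q) for q in points] for p in points]
print(len(matrix), len(matrix[0]))  # 8 8
```

The matrix is symmetric with a zero diagonal, which is a quick sanity check on the result.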
Task:
- Add a function to save the random vectors and the generated matrix into a file.

Timing
from datetime import datetime

startTime = datetime.now()
n = 10**8
for i in range(n):
    continue
print(datetime.now() - startTime)
0:00:06.661880
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
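As an alternative to timing with `datetime`, the standard-library `timeit` module runs a statement repeatedly and reports the total elapsed time (a smaller loop is used here to keep the example fast):

```python
import timeit

# run the statement 10 times and return the total elapsed seconds
elapsed = timeit.timeit('for i in range(10**5): pass', number=10)
print(elapsed > 0)  # True
```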
Processes

Launching a process. Parallelization: shared resources, clusters, clouds.
import os
import subprocess
from subprocess import call, Popen

# print(os.system('/path/yourshellscript.sh args'))
subprocess.run(["ls", "-l", "/dev/null"], stdout=subprocess.PIPE)
subprocess.run("exit 1", shell=True, check=True)
call(["ls", "-l"])
args = ['/path/yourshellscript.sh', '-arg1', 'value1', '-arg2', 'value2']
p = Popen(args, shel...
_____no_output_____
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
How to do the equivalent of shell piping in Python? This is the basic step of an automated pipeline.

`cat test.txt | grep something`

**Task**:
- Test this!
- Uncomment `p1.stdout.close()`. Why is it not working?
- What are signals? Read about SIGPIPE.
from subprocess import Popen, PIPE

p1 = Popen(["cat", "test.txt"], stdout=PIPE)
p2 = Popen(["grep", "something"], stdin=p1.stdout, stdout=PIPE)
# p1.stdout.close()
output = p2.communicate()[0]
_____no_output_____
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
Questions:
- What are Python's native datatypes? Have a look at the Python online documentation for each datatype.
- How many data types does Python have?
- Python is a "dynamic" language. What does it mean?
- Python is an "interpreted" language. What does it mean?
- Which data structures are mutable and which are immuta...
def run(l=[]):
    l.append(len(l))
    return l

print(run())
print(run())
print(run())
[0] [0, 1] [0, 1, 2]
CC0-1.0
day1/tutorial.ipynb
grokkaine/biopycourse
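The usual fix for the mutable-default pitfall above is to default to `None` and create the list inside the function, so each call gets a fresh object:

```python
def run(l=None):
    if l is None:
        l = []  # a new list on every call
    l.append(len(l))
    return l

print(run())  # [0]
print(run())  # [0]
print(run())  # [0]
```

Default values are evaluated once, at function definition time; with `None` as a sentinel, the mutable object is created per call instead.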
Preprocessors For MDX

> Custom preprocessors that help convert notebook content into MDX

This module defines [nbconvert.Custom Preprocessors](https://nbconvert.readthedocs.io/en/latest/nbconvert_library.html#Custom-Preprocessors) that facilitate transforming notebook content into MDX, which is a variation of markdown. C...
# export
from nbconvert.preprocessors import Preprocessor
from nbconvert import MarkdownExporter
from nbconvert.preprocessors import TagRemovePreprocessor
from nbdev.imports import get_config
from traitlets.config import Config
from pathlib import Path
import re, uuid
from fastcore.basics import AttrDict
from nbdoc.med...
_____no_output_____
Apache-2.0
nbs/mdx.ipynb
outerbounds/nbdoc
Injecting Metadata Into Cells -
#export
class InjectMeta(Preprocessor):
    """
    Allows you to inject metadata into a cell for further preprocessing with a comment.
    """
    pattern = r'(^\s*#(?:cell_meta|meta):)(\S+)(\s*[\n\r])'

    def preprocess_cell(self, cell, resources, index):
        if cell.cell_type == 'code' and re.search(_re_me...
_____no_output_____
Apache-2.0
nbs/mdx.ipynb
outerbounds/nbdoc
To inject metadata, make a comment in a cell with the following pattern: `cell_meta:{key=value}`. Note that `meta` is an alias for `cell_meta`.

For example, consider the following code:
_test_file = 'test_files/hello_world.ipynb'
first_cell = read_nb(_test_file)['cells'][0]
print(first_cell['source'])
#meta:show_steps=start,train print('hello world')
Apache-2.0
nbs/mdx.ipynb
outerbounds/nbdoc
At the moment, this cell has no metadata:
print(first_cell['metadata'])
{}
Apache-2.0
nbs/mdx.ipynb
outerbounds/nbdoc
However, after we process this notebook with `InjectMeta`, the appropriate metadata will be injected:
c = Config()
c.NotebookExporter.preprocessors = [InjectMeta]
exp = NotebookExporter(config=c)
cells, _ = exp.from_filename(_test_file)
first_cell = json.loads(cells)['cells'][0]
assert first_cell['metadata'] == {'nbdoc': {'show_steps': 'start,train'}}
first_cell['metadata']
_____no_output_____
Apache-2.0
nbs/mdx.ipynb
outerbounds/nbdoc
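The comment pattern that `InjectMeta` matches can be checked directly with the same regex (a small sketch; the pattern string is copied from the class above):

```python
import re

pattern = r'(^\s*#(?:cell_meta|meta):)(\S+)(\s*[\n\r])'
source = "#meta:show_steps=start,train\nprint('hello world')"
m = re.search(pattern, source)
# group 2 holds the key=value payload that gets injected as metadata
print(m.group(2))  # show_steps=start,train
```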
Strip Ansi Characters From Output -
#export
_re_ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')

class StripAnsi(Preprocessor):
    """Strip Ansi Characters."""
    def preprocess_cell(self, cell, resources, index):
        for o in cell.get('outputs', []):
            if o.get('name') and o.name == 'stdout':
                o['t...
_____no_output_____
Apache-2.0
nbs/mdx.ipynb
outerbounds/nbdoc
Gets rid of colors that are streamed from standard out, which can interfere with static site generators:
c, _ = run_preprocessor([StripAnsi], 'test_files/run_flow.ipynb')
assert not _re_ansi_escape.findall(c)

# export
def _get_cell_id(id_length=36):
    "generate random id for artificial notebook cell"
    return uuid.uuid4().hex[:id_length]

def _get_md_cell(content="<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT!...
_____no_output_____
Apache-2.0
nbs/mdx.ipynb
outerbounds/nbdoc
Insert Warning Into Markdown -
# export
class InsertWarning(Preprocessor):
    """Insert Autogenerated Warning Into Notebook after the first cell."""
    def preprocess(self, nb, resources):
        nb.cells = nb.cells[:1] + [_get_md_cell()] + nb.cells[1:]
        return nb, resources
_____no_output_____
Apache-2.0
nbs/mdx.ipynb
outerbounds/nbdoc
This preprocessor inserts a warning in the markdown destination that the file is autogenerated. This warning is inserted in the second cell so we do not interfere with front matter.
c, _ = run_preprocessor([InsertWarning], 'test_files/hello_world.ipynb', display_results=True)
assert "<!-- WARNING: THIS FILE WAS AUTOGENERATED!" in c
```python #meta:show_steps=start,train print('hello world') ``` <CodeOutputBlock lang="python"> ``` hello world ``` </CodeOutputBlock> <!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! Instead, edit the notebook w/the location & name as this file. --> ```python ```
Apache-2.0
nbs/mdx.ipynb
outerbounds/nbdoc
Remove Empty Code Cells -
# export
def _emptyCodeCell(cell):
    "Return True if cell is an empty Code Cell."
    if cell['cell_type'] == 'code':
        if not cell.source or not cell.source.strip():
            return True
    else:
        return False

class RmEmptyCode(Preprocessor):
    """Remove empty code cells."""
    def preprocess(self, nb, resources):...
_____no_output_____
Apache-2.0
nbs/mdx.ipynb
outerbounds/nbdoc