Squelching Line Output

You might have noticed the annoying line of the form `[]` before the plots. This is because the `.plot` function actually returns output. When we wish not to display that output, we can suppress it with a semicolon, as follows.
plt.plot(X);
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Adding Axis Labels

No self-respecting quant leaves a graph without labeled axes. Here are some commands to help with that.
X = np.random.normal(0, 1, 100)
X2 = np.random.normal(0, 1, 100)

plt.plot(X);
plt.plot(X2);
plt.xlabel('Time')  # The data we generated is unitless, but don't forget units in general.
plt.ylabel('Returns')
plt.legend(['X', 'X2']);
Generating Statistics

Let's use `numpy` to compute some simple statistics.
Y = np.mean(X)
Y
Y = np.std(X)
Y
Getting Real Pricing Data

Randomly sampled data can be great for testing ideas, but let's get some real data. We can use `get_pricing` to do that. You can use the `?` syntax as discussed above to get more information on `get_pricing`'s arguments.
!pip install yfinance
!pip install yahoofinancials

import yfinance as yf
from yahoofinancials import YahooFinancials

# Reference: https://towardsdatascience.com/a-comprehensive-guide-to-downloading-stock-prices-in-python-2cd93ff821d4
data = yf.download('MSFT', start='2012-01-01', end='2015-06-01', progress=False)
Our data is now a dataframe. You can see the datetime index and the columns with different pricing data.
data
This is a pandas dataframe, so we can index into it to get just the price, like this. For more info on pandas, please [click here](http://pandas.pydata.org/pandas-docs/stable/10min.html).
X = data['Open']
X
Because there is now also date information in our data, we provide two series to `.plot`. `X.index` gives us the datetime index, and `X.values` gives us the pricing values. These are used as the X and Y coordinates to make a graph.
plt.plot(X.index, X.values)
plt.ylabel('Price')
plt.legend(['MSFT']);

np.mean(X)
np.std(X)
Getting Returns from Prices

We can use the `pct_change` function to get returns. Notice how we drop the first element after doing this, as it will be `NaN` (nothing -> something results in a NaN percent change).
R = X.pct_change()[1:]
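As a minimal sketch of why that first element is dropped (using a tiny hand-made price series rather than real MSFT data), the first entry of `pct_change` has no previous price to compare against:

```python
import numpy as np
import pandas as pd

# A tiny, made-up price series to illustrate pct_change.
prices = pd.Series([100.0, 110.0, 99.0])

returns = prices.pct_change()

# The first return is NaN: there is no previous price to compare against.
print(bool(returns.isna().iloc[0]))  # True

# Dropping the first element leaves clean returns: +10% then -10%.
R = returns[1:]
print(np.allclose(R.values, [0.1, -0.1]))  # True
```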
We can plot the returns distribution as a histogram.
plt.hist(R, bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['MSFT Returns']);
Get statistics again.
np.mean(R)
np.std(R)
Now let's go backwards and generate data out of a normal distribution using the statistics we estimated from Microsoft's returns. We'll see that we have good reason to suspect Microsoft's returns may not be normal, as the resulting normal distribution looks far different.
plt.hist(np.random.normal(np.mean(R), np.std(R), 10000), bins=20)
plt.xlabel('Return')
plt.ylabel('Frequency')
plt.legend(['Normally Distributed Returns']);
Generating a Moving Average

`pandas` has some nice tools to allow us to generate rolling statistics. Here's an example. Notice how there's no moving average for the first 60 days, as we don't have 60 days of data on which to generate the statistic.
# Take the average of the last 60 days at each timepoint.
MAVG = X.rolling(60).mean()
plt.plot(X.index, X.values)
plt.plot(MAVG.index, MAVG.values)
plt.ylabel('Price')
plt.legend(['MSFT', '60-day MAVG']);
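The leading-`NaN` behaviour is easy to check on a toy series; here a hypothetical 3-point window stands in for the 60-day one:

```python
import pandas as pd

# Hypothetical short series; a 3-point window stands in for the 60-day one.
s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])
mavg = s.rolling(3).mean()

# The first window-1 entries are NaN: not enough history yet.
print(mavg.tolist())  # [nan, nan, 2.0, 3.0, 4.0]
```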
#@markdown Before starting please save the notebook in your drive by clicking on `File -> Save a copy in drive` #@markdown Check how many CPUs we have, you can choose a high memory instance to get 4. import os print(f"We have {os.cpu_count()} CPU cores.") #@markdown Mount google drive from google.colab import drive, ou...
MIT
Resample_Audio.ipynb
materialvision/melgan-neurips
Talktorial 5: Compound clustering

Developed in the CADD seminars 2017 and 2018, AG Volkamer, Charité/FU Berlin. Calvinna Caswara and Gizem Spriewald.

Aim of this talktorial

Similar compounds might bind to the same targets and show similar effects. Based on this similar property principle, compound similarity can be used ...
from IPython.display import IFrame
IFrame('images/butina_full.pdf', width=600, height=300)
CC-BY-4.0
talktorials/5_compound_clustering/T5_compound_clustering.ipynb
caramirezs/TeachOpenCADD
*Figure 1:* Theoretical example of the Butina clustering algorithm, drawn by Calvinna Caswara.

Picking diverse compounds

Finding representative sets of compounds is a concept often used in the pharmaceutical industry.
* Let's say, we applied a virtual screening campaign but only have a limited amount of resources to experim...
# Import packages import pandas as pd import numpy import matplotlib.pyplot as plt import time import random from random import choices from rdkit import Chem from rdkit.Chem import AllChem from rdkit import DataStructs from rdkit.DataStructs import cDataStructs from rdkit.ML.Cluster import Butina from rdkit.Chem impor...
Number of compounds converted: 4925 Fingerprint length per compound: 2048
2. Tanimoto similarity and distance matrix

Now that we have generated fingerprints, we move on to the next step: the identification of potential cluster centroids. For this, we define functions to calculate the Tanimoto similarity and distance matrix.
# Calculate distance matrix for fingerprint list def Tanimoto_distance_matrix(fp_list): dissimilarity_matrix = [] for i in range(1,len(fp_list)): similarities = DataStructs.BulkTanimotoSimilarity(fp_list[i],fp_list[:i]) # Since we need a distance matrix, calculate 1-x for every element in simila...
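RDKit computes this for us, but the underlying formula is simple: the Tanimoto similarity of two binary fingerprints is the number of shared on-bits divided by the number of bits set in either. A plain-Python sketch on hand-made bit vectors (not real fingerprints):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two binary fingerprints (lists of 0/1)."""
    on_a = {i for i, bit in enumerate(fp_a) if bit}
    on_b = {i for i, bit in enumerate(fp_b) if bit}
    union = on_a | on_b
    if not union:
        return 1.0  # convention: two all-zero fingerprints are identical
    return len(on_a & on_b) / len(union)

a = [1, 1, 0, 1, 0]
b = [1, 0, 0, 1, 1]
sim = tanimoto(a, b)  # 2 shared bits / 4 bits in the union = 0.5
print(sim, 1 - sim)   # similarity 0.5, distance 0.5
```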
See also [rdkit Cookbook: Clustering molecules](http://rdkit.org/docs/Cookbook.html#clustering-molecules).
# Example: Calculate single similarity of two fingerprints sim = DataStructs.TanimotoSimilarity(fingerprints[0],fingerprints[1]) print ('Tanimoto similarity: %4.2f, distance: %4.2f' %(sim,1-sim)) # Example: Calculate distance matrix (distance = 1-similarity) Tanimoto_distance_matrix(fingerprints)[0:5] # Side note: That...
12125350 12125350
3. Clustering molecules: Centroids and exclusion spheres

In this part, we cluster the molecules and look at the results. Define a clustering function.
# Input: Fingerprints and a threshold for the clustering def ClusterFps(fps,cutoff=0.2): # Calculate Tanimoto distance matrix distance_matr = Tanimoto_distance_matrix(fps) # Now cluster the data with the implemented Butina algorithm: clusters = Butina.ClusterData(distance_matr,len(fps),cutoff,isDistData...
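The exclusion-sphere idea behind Butina clustering can be sketched in plain Python (a toy illustration, not RDKit's implementation): repeatedly pick the unassigned point with the most unassigned neighbours within the cutoff as a centroid, assign it and its neighbours to a cluster, and continue until nothing is left.

```python
def butina_sketch(dist, n, cutoff):
    """Toy exclusion-sphere clustering.
    dist(i, j) -> distance between points i and j; n points; cutoff radius."""
    neighbours = {
        i: {j for j in range(n) if j != i and dist(i, j) <= cutoff}
        for i in range(n)
    }
    unassigned = set(range(n))
    clusters = []
    while unassigned:
        # Pick the remaining point with the most remaining neighbours.
        centroid = max(unassigned, key=lambda i: len(neighbours[i] & unassigned))
        members = (neighbours[centroid] & unassigned) | {centroid}
        clusters.append(sorted(members))
        unassigned -= members
    return clusters

# 1-D toy points; distance is the absolute difference.
pts = [0.0, 0.1, 0.15, 1.0, 1.05, 5.0]
clusters = butina_sketch(lambda i, j: abs(pts[i] - pts[j]), len(pts), cutoff=0.2)
print(clusters)  # [[0, 1, 2], [3, 4], [5]]
```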
Cluster the molecules based on their fingerprint similarity.
# Run the clustering procedure for the dataset clusters = ClusterFps(fingerprints,cutoff=0.3) # Give a short report about the numbers of clusters and their sizes num_clust_g1 = len([c for c in clusters if len(c) == 1]) num_clust_g5 = len([c for c in clusters if len(c) > 5]) num_clust_g25 = len([c for c in clusters if ...
How to pick a reasonable cutoff?

Since the clustering result depends on the threshold chosen by the user, we will take a closer look at the choice of cutoff.
for i in numpy.arange(0., 1.0, 0.1): clusters = ClusterFps(fingerprints,cutoff=i) fig = plt.figure(1, figsize=(10, 4)) plt1 = plt.subplot(111) plt.axis([0, len(clusters), 0, len(clusters[0])+1]) plt.xlabel('Cluster index', fontsize=20) plt.ylabel('Number of molecules', fontsize=20) plt.tick_...
As you can see, the higher the threshold (distance cutoff), the more molecules are considered similar and, therefore, grouped into fewer clusters. The lower the threshold, the more small clusters and "singletons" appear.
* The smaller the distance cutoff, the more similar the compounds are required to be to be...
dist_co = 0.2 clusters = ClusterFps(fingerprints,cutoff=dist_co) # Plot the size of the clusters - save plot fig = plt.figure(1, figsize=(8, 2.5)) plt1 = plt.subplot(111) plt.axis([0, len(clusters), 0, len(clusters[0])+1]) plt.xlabel('Cluster index', fontsize=20) plt.ylabel('# molecules', fontsize=20) plt.tick_params(...
Number of clusters 1225 from 4925 molecules at distance cut-off 0.20 Number of molecules in largest cluster: 146 Similarity between two random points in same cluster 0.82 Similarity between two random points in different cluster 0.22
Cluster visualization: 10 examples from the largest cluster

Now, let's have a closer look at the first 10 molecular structures of the largest cluster.
print ('Ten molecules from largest cluster:') # Draw molecules Draw.MolsToGridImage([mols[i][0] for i in clusters[0][:10]], legends=[mols[i][1] for i in clusters[0][:10]], molsPerRow=5) # Save molecules from largest cluster for MCS analysis in Talktorial 9 w = Chem.SDWriter('...
10 examples from second largest cluster
print('Ten molecules from second largest cluster:')
# Draw molecules
Draw.MolsToGridImage([mols[i][0] for i in clusters[1][:10]],
                     legends=[mols[i][1] for i in clusters[1][:10]],
                     molsPerRow=5)
Ten molecules from second largest cluster:
The first ten molecules in the respective clusters look indeed similar to each other and many share a common scaffold (visually detected). See **talktorial 6** for more information on how to calculate the maximum common substructure (MCS) of a set of molecules.

Examples from first 10 clusters

For comparison, we have a ...
print('Ten molecules from first 10 clusters:')
# Draw molecules
Draw.MolsToGridImage([mols[clusters[i][0]][0] for i in range(10)],
                     legends=[mols[clusters[i][0]][1] for i in range(10)],
                     molsPerRow=5)
Ten molecules from first 10 clusters:
Save cluster centers from first 3 clusters as SVG file.
# Generate image img = Draw.MolsToGridImage([mols[clusters[i][0]][0] for i in range(0,3)], legends=["Cluster "+str(i) for i in range(1,4)], subImgSize=(200,200), useSVG=True) # Get SVG data molsvg = img.data # Replace non-transparent to transparent background and set font siz...
While some similarity is still visible, the centroids from the different clusters clearly look more dissimilar than the compounds within one cluster.

Intra-cluster Tanimoto similarities

We can also have a look at the intra-cluster Tanimoto similarities.
# Function to compute Tanimoto similarity for all pairs of fingerprints in each cluster def IntraTanimoto(fps_clusters): intra_similarity =[] # Calculate intra similarity per cluster for k in range(0,len(fps_clusters)): # Tanimoto distance matrix function converted to similarity matrix (1-distance) ...
Compound picking

In the following, we are going to pick a final list of **max. 1000 compounds** as a **diverse** subset. For this, we take the cluster centroid from each cluster (i.e. the first molecule of each cluster) and then we take - starting with the largest cluster - for each cluster the 10 molecules (or 50% if ...
# Get the cluster center of each cluster (first molecule in each cluster)
clus_center = [mols[c[0]] for c in clusters]
# How many cluster centers/clusters do we have?
print('Number of cluster centers: ', len(clus_center))
Number of cluster centers: 1225
Sort clusters by size and molecules in each cluster by similarity.
# Sort the molecules within a cluster based on their similarity # to the cluster center and sort the clusters based on their size clusters_sort = [] for c in clusters: if len(c) < 2: continue # Singletons else: # Compute fingerprints for each cluster element fps_clust = [rdkit_gen.GetFingerprin...
Pick a maximum of 1000 compounds.
# Count selected molecules, pick cluster centers first sel_molecules = clus_center.copy() # Take 10 molecules (or a maximum of 50%) of each cluster starting with the largest one index = 0 diff = 1000 - len(sel_molecules) while diff > 0 and index < len(clusters_sort): # Take indices of sorted clusters tmp_clust...
# Selected molecules: 1225
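The picking rule (centroids first, then a few extra members per cluster, largest clusters first, until the budget is reached) can be sketched in plain Python on hypothetical clusters of molecule indices; this is an illustration of the idea, not the talktorial's exact loop:

```python
def pick_diverse(clusters, max_total=1000, per_cluster=10):
    """Toy compound picking: cluster centroids first, then extra members
    (up to per_cluster, capped at 50% of the cluster) from the largest clusters."""
    picked = [c[0] for c in clusters]  # centroids (first molecule of each cluster)
    for cluster in sorted(clusters, key=len, reverse=True):
        if len(picked) >= max_total:
            break
        extra = cluster[1:1 + min(per_cluster, len(cluster) // 2)]
        picked.extend(extra[:max_total - len(picked)])
    return picked

# Hypothetical clusters of molecule indices: sizes 30, 4 and 1.
clusters = [list(range(0, 30)), list(range(30, 34)), [34]]
sel = pick_diverse(clusters, max_total=20)
print(len(sel))  # 15: 3 centroids + 10 from the big cluster + 2 from the next
```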
This set of diverse molecules could now be used for experimental testing.

(Additional information: run times)

At the end of the talktorial, we can play with the size of the dataset and see how the Butina clustering run time changes.
# Reuse old dataset
sampled_mols = mols.copy()
Note that you can try out larger datasets, but sizes above 10,000 data points already start to consume quite some memory and time (which is why we stopped there).
# Helper function for time computation def MeasureRuntime(sampled_mols): start_time = time.time() sampled_fingerprints = [rdkit_gen.GetFingerprint(m) for m,idx in sampled_mols] # Run the clustering with the dataset sampled_clusters = ClusterFps(sampled_fingerprints,cutoff=0.3) return(time.time() - ...
Copyright 2018 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under...
Apache-2.0
site/en/r2/tutorials/load_data/text.ipynb
crypdra/docs
Load text with tf.data

This tutorial provides an example of how to use `tf.data.TextLineDataset` to load examples from text files. `TextLineDataset` is designed to create a dataset from a text file, in which...
from __future__ import absolute_import, division, print_function, unicode_literals

try:
  # %tensorflow_version only exists in Colab.
  %tensorflow_version 2.x
except Exception:
  pass
import tensorflow as tf

import tensorflow_datasets as tfds
import os
The texts of the three translations are by:

- [William Cowper](https://en.wikipedia.org/wiki/William_Cowper) — [text](https://storage.googleapis.com/download.tensorflow.org/data/illiad/cowper.txt)
- [Edward, Earl of Derby](https://en.wikipedia.org/wiki/Edward_Smith-Stanley,_14th_Earl_of_Derby) — [text](https://storage....
DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']

for name in FILE_NAMES:
  text_dir = tf.keras.utils.get_file(name, origin=DIRECTORY_URL + name)

parent_dir = os.path.dirname(text_dir)
parent_dir
Load text into datasets

Iterate through the files, loading each one into its own dataset. Each example needs to be individually labeled, so use `tf.data.Dataset.map` to apply a labeler function to each one. This will iterate over every example in the dataset, returning (`example, label`) pairs.
def labeler(example, index): return example, tf.cast(index, tf.int64) labeled_data_sets = [] for i, file_name in enumerate(FILE_NAMES): lines_dataset = tf.data.TextLineDataset(os.path.join(parent_dir, file_name)) labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i)) labeled_data_sets.append(labeled...
Combine these labeled datasets into a single dataset, and shuffle it.
BUFFER_SIZE = 50000
BATCH_SIZE = 64
TAKE_SIZE = 5000

all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
  all_labeled_data = all_labeled_data.concatenate(labeled_dataset)

all_labeled_data = all_labeled_data.shuffle(
    BUFFER_SIZE, reshuffle_each_iteration=False)
You can use `tf.data.Dataset.take` and `print` to see what the `(example, label)` pairs look like. The `numpy` property shows each Tensor's value.
for ex in all_labeled_data.take(5):
  print(ex)
Encode text lines as numbers

Machine learning models work on numbers, not words, so the string values need to be converted into lists of numbers. To do that, map each unique word to a unique integer.

Build vocabulary

First, build a vocabulary by tokenizing the text into a collection of individual unique words. There are...
tokenizer = tfds.features.text.Tokenizer()

vocabulary_set = set()
for text_tensor, _ in all_labeled_data:
  some_tokens = tokenizer.tokenize(text_tensor.numpy())
  vocabulary_set.update(some_tokens)

vocab_size = len(vocabulary_set)
vocab_size
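The vocabulary-building step boils down to collecting unique tokens into a set. A plain-Python sketch using a naive whitespace tokenizer (standing in for `tfds.features.text.Tokenizer`) on two made-up lines:

```python
# Hypothetical lines standing in for the Iliad text.
lines = [
    "sing o goddess the anger",
    "the anger of achilles",
]

vocabulary_set = set()
for line in lines:
    # Naive whitespace tokenizer; tfds's Tokenizer also strips punctuation.
    vocabulary_set.update(line.split())

print(len(vocabulary_set))  # 7 unique words
```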
Encode examples

Create an encoder by passing the `vocabulary_set` to `tfds.features.text.TokenTextEncoder`. The encoder's `encode` method takes in a string of text and returns a list of integers.
encoder = tfds.features.text.TokenTextEncoder(vocabulary_set)
You can try this on a single line to see what the output looks like.
example_text = next(iter(all_labeled_data))[0].numpy()
print(example_text)

encoded_example = encoder.encode(example_text)
print(encoded_example)
Now run the encoder on the dataset by wrapping it in `tf.py_function` and passing that to the dataset's `map` method.
def encode(text_tensor, label):
  encoded_text = encoder.encode(text_tensor.numpy())
  return encoded_text, label

def encode_map_fn(text, label):
  return tf.py_function(encode, inp=[text, label], Tout=(tf.int64, tf.int64))

all_encoded_data = all_labeled_data.map(encode_map_fn)
Split the dataset into test and train batches

Use `tf.data.Dataset.take` and `tf.data.Dataset.skip` to create a small test dataset and a larger training set. Before being passed into the model, the datasets need to be batched. Typically, the examples inside of a batch need to be the same size and shape. But, the example...
train_data = all_encoded_data.skip(TAKE_SIZE).shuffle(BUFFER_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE, padded_shapes=([-1], []))

test_data = all_encoded_data.take(TAKE_SIZE)
test_data = test_data.padded_batch(BATCH_SIZE, padded_shapes=([-1], []))
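What `padded_batch` does to the variable-length sequences can be sketched in plain Python: every sequence in a batch is padded with zeros up to the batch's longest length (an illustration of the idea, not TensorFlow's implementation):

```python
def pad_batch(sequences, pad_value=0):
    """Pad variable-length integer sequences to the batch's longest length."""
    max_len = max(len(seq) for seq in sequences)
    return [seq + [pad_value] * (max_len - len(seq)) for seq in sequences]

# Hypothetical encoded lines of different lengths.
batch = [[5, 3], [7, 1, 4, 2], [9]]
print(pad_batch(batch))  # [[5, 3, 0, 0], [7, 1, 4, 2], [9, 0, 0, 0]]
```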
Now, `test_data` and `train_data` are not collections of (`example, label`) pairs, but collections of batches. Each batch is a pair of (*many examples*, *many labels*) represented as arrays.To illustrate:
sample_text, sample_labels = next(iter(test_data))

sample_text[0], sample_labels[0]
Since we have introduced a new token encoding (the zero used for padding), the vocabulary size has increased by one.
vocab_size += 1
Build the model
model = tf.keras.Sequential()
The first layer converts integer representations to dense vector embeddings. See the [Word Embeddings](../../tutorials/sequences/word_embeddings) tutorial for more details.
model.add(tf.keras.layers.Embedding(vocab_size, 64))
The next layer is a [Long Short-Term Memory](http://colah.github.io/posts/2015-08-Understanding-LSTMs/) layer, which lets the model understand words in their context with other words. A bidirectional wrapper on the LSTM helps it to learn about the datapoints in relationship to the datapoints that came before it and aft...
model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)))
Finally, we'll have a series of one or more densely connected layers, with the last one being the output layer. The output layer produces a probability for each of the labels. The one with the highest probability is the model's prediction of an example's label.
# One or more dense layers.
# Edit the list in the `for` line to experiment with layer sizes.
for units in [64, 64]:
  model.add(tf.keras.layers.Dense(units, activation='relu'))

# Output layer. The first argument is the number of labels.
model.add(tf.keras.layers.Dense(3, activation='softmax'))
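The softmax output layer turns raw activations into a probability distribution over the labels; the prediction is the label with the highest probability. A numpy sketch with hypothetical output-layer activations:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

logits = np.array([2.0, 1.0, 0.1])  # hypothetical activations for 3 labels
probs = softmax(logits)

print(bool(np.isclose(probs.sum(), 1.0)))  # True: a distribution over labels
print(int(np.argmax(probs)))               # 0: the predicted label
```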
Finally, compile the model. For a softmax categorization model, use `sparse_categorical_crossentropy` as the loss function. You can try other optimizers, but `adam` is very common.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
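For a single example, sparse categorical crossentropy is just the negative log of the probability the model assigned to the true integer label. A numpy sketch with a hypothetical softmax output:

```python
import numpy as np

def sparse_categorical_crossentropy(probs, label):
    """Negative log-probability assigned to the true integer label."""
    return -np.log(probs[label])

probs = np.array([0.7, 0.2, 0.1])  # hypothetical softmax output over 3 labels
loss = sparse_categorical_crossentropy(probs, 0)
print(round(float(loss), 4))  # 0.3567, i.e. -ln(0.7)
```

A confident correct prediction gives a small loss; assigning low probability to the true label is penalised heavily.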
Train the model

This model running on this data produces decent results (about 83% accuracy).
model.fit(train_data, epochs=3, validation_data=test_data)

eval_loss, eval_acc = model.evaluate(test_data)

print('\nEval loss: {:.3f}, Eval accuracy: {:.3f}'.format(eval_loss, eval_acc))
Introduction

This project continues the web crawling of the website fmovies' [most-watched](https://fmovies.to/most-watched) section. This is the second part. In part one we crawled the website and extracted information. In part two we will tidy and clean the data for analysis in part three.
import pandas as pd
import numpy as np

movie_df = pd.read_csv('../Data/final_movies_df.csv')
tv_df = pd.read_csv('../Data/final_tvs_df.csv')

print(movie_df.columns)
print(tv_df.columns)

movie_df.head()
MIT
Files/.ipynb_checkpoints/fmovies_tidy-checkpoint.ipynb
nibukdk/web-scrapping-fmovie.to
Columns

- 'movie_name / tv_name': name of the movie / TV show
- 'watch_link': URL of the page to watch the movie/show
- 'date_added': date added to the dataframe (not in fmovies)
- 'site_rank': ranking on fmovies by order of most watched, starting from 1
- 'Genre': genres
- 'Stars': cast
- 'IMDb': IMDb ratings
- 'Director': director
- 'Relea...
movie_df.columns = movie_df.columns.str.upper().tolist()
tv_df.columns = tv_df.columns.str.upper().tolist()

tv_df.head(2)
movie_df.head(2)
Tidying

1. The Genre column has a list of values in one row; let's make it one value per row.
2. The release date can be converted to datetime and then used as the index of the dataframe.
3. Ratings have two values: the first is the site rating and the second is the number of reviews by viewers. Let's separate them into different columns.

Genre Split and Date Column

Let...
def split_genre(df): cp= df.copy() # Spilt the genre by "," and stack to make muliple rows each with own unique genre # this will return a new df with genres only genre= cp.GENRE.str.split(',').apply(pd.Series, 1).stack() # Pop one of index genre.index = genre.index.droplevel(-1) ...
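Splitting comma-separated genres into one row per genre can also be done with `str.split` plus `explode` (available in pandas 0.25+); a minimal sketch on a hypothetical frame mirroring the GENRE column, as an alternative to the split/stack approach used here:

```python
import pandas as pd

# Hypothetical frame mirroring the comma-separated GENRE column.
df = pd.DataFrame({
    "MOVIE_NAME": ["A", "B"],
    "GENRE": ["Action,Drama", "Comedy"],
})

# One row per genre; the other columns are repeated for each genre.
tidy = df.assign(GENRE=df["GENRE"].str.split(",")).explode("GENRE")
print(tidy["GENRE"].tolist())  # ['Action', 'Drama', 'Comedy']
```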
Ratings Columns Split
site_user_rating_4movie = movie_df_tidy_1.RATING.str.split("/").str[0] site_number_user_rated_4movie = movie_df_tidy_1.RATING.str.split("/").str[1].str.split(" ").str[0] site_user_rating_4tv = tv_df_tidy_1.RATING.str.split("/").str[0] site_number_user_rated_4tv = tv_df_tidy_1.RATING.str.split("/").str[1].str.split(" "...
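Assuming the rating strings look roughly like "7.2/123 votes" (rating, then review count — the real site format is not shown here, so these values are made up), the chained splits work like this:

```python
import pandas as pd

# Hypothetical RATING strings; the real site format may differ.
rating = pd.Series(["7.2/123 votes", "8.5/4567 votes"])

site_rating = rating.str.split("/").str[0]
n_reviews = rating.str.split("/").str[1].str.split(" ").str[0]

print(site_rating.tolist())  # ['7.2', '8.5']
print(n_reviews.tolist())    # ['123', '4567']
```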
Assign new columns and drop the old ones
tv_df_tidy_2 = tv_df_tidy_1.copy() movie_df_tidy_2= movie_df_tidy_1.copy() movie_df_tidy_2['User_Reviews_local'] = site_user_rating_4movie movie_df_tidy_2['Number_Reviews_local'] = site_number_user_rated_4movie tv_df_tidy_2['User_Reviews_local'] = site_user_rating_4tv tv_df_tidy_2['Number_Reviews_local'] = site_number_...
Missing Values
print(movie_df_tidy_2.info())
print("**" * 20)
print(tv_df_tidy_2.info())
<class 'pandas.core.frame.DataFrame'> DatetimeIndex: 3790 entries, 2019-04-22 to 2007-02-09 Data columns (total 11 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 MOVIE_NAME 3790 non-null object 1 WATCH_LINK 3790 non-null ...
It seems only the movies dataframe has null values; let's dive deeper.
movie_df_tidy_2[movie_df_tidy_2.GENRE.isnull()]
Earlier, to keep the crawl from dragging on, we returned NaN for bad requests. We could go through each link individually to fill in the values, but let's drop them for now.
movie_df_tidy_2.dropna(inplace=True,axis=0)
Write file for the analysis part

Passing `index=False` on write would remove the date index, so let's not do that.
movie_df_tidy_2.to_csv('../Data/Movie.csv')
tv_df_tidy_2.to_csv('../Data/TV.csv')
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License.

Configuration

_**Setting up your Azure Machine Learning services workspace and configuring your notebook...
import azureml.core

print("This notebook was created using version 1.0.48 of the Azure ML SDK")
print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")
MIT
configuration.ipynb
mesameki/MachineLearningNotebooks
If you are using an older version of the SDK than this notebook was created with, you should upgrade your SDK.

3. Azure Container Instance registration

Azure Machine Learning uses [Azure Container Instance (ACI)](https://azure.microsoft.com/services/container-instances) to deploy dev/test web services. An Azure subs...
import os

subscription_id = os.getenv("SUBSCRIPTION_ID", default="<my-subscription-id>")
resource_group = os.getenv("RESOURCE_GROUP", default="<my-resource-group>")
workspace_name = os.getenv("WORKSPACE_NAME", default="<my-workspace-name>")
workspace_region = os.getenv("WORKSPACE_REGION", default="eastus2")
Access your workspace

The following cell uses the Azure ML SDK to attempt to load the workspace specified by your parameters. If this cell succeeds, your notebook library will be configured to access the workspace from all notebooks using the `Workspace.from_config()` method. The cell can fail if the specified worksp...
from azureml.core import Workspace try: ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name) # write the details of the workspace to a configuration file to the notebook library ws.write_config() print("Workspace configuration succeeded. Sk...
Create a new workspace

If you don't have an existing workspace and are the owner of the subscription or resource group, you can create a new workspace. If you don't have a resource group, the create workspace command will create one for you using the name you provide.

**Note**: As with other Azure services, there are l...
from azureml.core import Workspace # Create the workspace using the specified parameters ws = Workspace.create(name = workspace_name, subscription_id = subscription_id, resource_group = resource_group, location = workspace_region, ...
Create compute resources for your training experiments

Many of the sample notebooks use Azure ML managed compute (AmlCompute) to train models using a dynamically scalable pool of compute. In this section you will create default compute clusters for use by the other notebooks and any other operations you choose.

To creat...
from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your CPU cluster cpu_cluster_name = "cpu-cluster" # Verify that cluster does not exist already try: cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name) pri...
To create a **GPU** cluster, run the cell below. Note that your subscription must have sufficient quota for GPU VMs or the command will fail. To increase quota, see [these instructions](https://docs.microsoft.com/en-us/azure/azure-supportability/resource-manager-core-quotas-request).
from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException # Choose a name for your GPU cluster gpu_cluster_name = "gpu-cluster" # Verify that cluster does not exist already try: gpu_cluster = ComputeTarget(workspace=ws, name=gpu_cluster_name) pri...
Importing the images into this script
import os import numpy as np directory = 'C:/Users/joaovitor/Desktop/Meu_Canal/DINO/' jump_img = os.listdir(os.path.join(directory, 'jump')) nojump_img = os.listdir(os.path.join(directory, 'no_jump')) #checking if the number of images in both directories are equals print(len(jump_img) == len(nojump_img)) print(len(ju...
False 81
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
Storing the image arrays into lists
import cv2

imgs_list_jump = []
imgs_list_nojump = []

for img in jump_img:
    images = cv2.imread(os.path.join(directory, 'jump', img), 0)  # 0 to convert the image to grayscale
    imgs_list_jump.append(images)

for img in nojump_img:
    images = cv2.imread(os.path.join(directory, 'no_jump', img), 0)  # 0 to conver...
[[0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] ... [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0] [0 0 0 ... 0 0 0]] ================================================== Images Dimensions: (480, 640)
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
Let's display the first image
import matplotlib.pyplot as plt

img = cv2.cvtColor(imgs_list_jump[0], cv2.COLOR_BGR2RGB)
plt.imshow(img)
plt.show()
_____no_output_____
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
The images are 480 pixels high and 640 pixels wide
print(imgs_list_jump[0].shape)
(480, 640)
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
The images are still very large, so we are going to resize them all to make them smaller
print('Original size:', imgs_list_jump[0].size) #original size
Original size: 307200
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
We will apply the code below to all images
scale_percent = 20  # 20 percent of original size
width = int(imgs_list_jump[0].shape[1] * scale_percent / 100)
height = int(imgs_list_jump[0].shape[0] * scale_percent / 100)
dim = (width, height)

# resize image
resized = cv2.resize(imgs_list_jump[0], dim, interpolation = cv2.INTER_AREA)

print('Original Dimensions:', ...
Original Dimensions: (480, 640) Resized Dimensions: (96, 128)
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
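The dimension arithmetic above can be checked on its own. `scaled_dims` below is a hypothetical helper, not from the notebook; note that `cv2.resize` takes the target size as `(width, height)`, which is why the resized array's shape later prints as `(96, 128)`:

```python
import numpy as np

# Hypothetical helper (not from the notebook): compute the cv2-style
# (width, height) target size from an image shape of (rows, cols).
def scaled_dims(shape, scale_percent):
    width = int(shape[1] * scale_percent / 100)
    height = int(shape[0] * scale_percent / 100)
    return (width, height)

img = np.zeros((480, 640), dtype=np.uint8)  # same size as the captured frames
print(scaled_dims(img.shape, 20))  # (128, 96) -> resized array shape (96, 128)
```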
Applying to all images
scale_percent = 20  # 20 percent of original size

resized_jump_list = []
resized_nojump_list = []

for img in imgs_list_jump:
    width = int(img.shape[1] * scale_percent / 100)
    height = int(img.shape[0] * scale_percent / 100)
    dim = (width, height)

    # resize image
    resized = cv2.resize(img, dim, interpola...
(96, 128) (96, 128)
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
Creating my X dataset
nojump_list_reshaped = []
jump_list_reshaped = []

for img in resized_nojump_list:
    nojump_list_reshaped.append(img.reshape(-1, img.size))

for img in resized_jump_list:
    jump_list_reshaped.append(img.reshape(-1, img.size))

X_nojump = np.array(nojump_list_reshaped).reshape(len(nojump_list_reshaped), nojump_list_...
(386, 12288) (81, 12288)
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
Joining both X's
X = np.vstack([X_nojump, X_jump])
print(X.shape)
(467, 12288)
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
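The reshape-and-stack step can be sketched with synthetic arrays (the counts below are made up for illustration; the real sets have 386 and 81 images):

```python
import numpy as np

# Sketch of the dataset assembly: flatten each (96, 128) grayscale frame into
# a row vector, stack each class, then stack the two classes.
n_nojump, n_jump, h, w = 5, 3, 96, 128
nojump = [np.zeros((h, w)) for _ in range(n_nojump)]
jump = [np.ones((h, w)) for _ in range(n_jump)]

X_nojump = np.array([img.reshape(-1) for img in nojump])
X_jump = np.array([img.reshape(-1) for img in jump])
X = np.vstack([X_nojump, X_jump])
print(X.shape)  # (8, 12288): one row per image, 96*128 features
```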
Creating my Y dataset
y_nojump = np.array([0 for i in range(len(nojump_list_reshaped))]).reshape(len(nojump_list_reshaped), -1)
y_jump = np.array([1 for i in range(len(jump_list_reshaped))]).reshape(len(jump_list_reshaped), -1)
_____no_output_____
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
Joining both Y's
y = np.vstack([y_nojump, y_jump])
print(y.shape)
(467, 1)
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
Shuffling both datasets
shuffle_index = np.random.permutation(y.shape[0])
#print(shuffle_index)
X, y = X[shuffle_index], y[shuffle_index]
_____no_output_____
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
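Indexing X and y with the same permutation is what keeps each row paired with its label; a small check on toy data:

```python
import numpy as np

# Row i of X is [2i, 2i+1] and carries label i, so after shuffling with one
# shared index array the pairing must survive.
rng = np.random.default_rng(0)
X = np.arange(10).reshape(5, 2)
y = np.arange(5).reshape(5, 1)

idx = rng.permutation(y.shape[0])
X_s, y_s = X[idx], y[idx]
assert all(X_s[i, 0] == 2 * y_s[i, 0] for i in range(5))
```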
Creating the X_train and y_train datasets
X_train = X
y_train = y
_____no_output_____
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
Choosing SVM (Support Vector Machine) as our Machine Learning model
from sklearn.svm import SVC

svm_clf = SVC(kernel='linear')
svm_clf.fit(X_train, y_train.ravel())
_____no_output_____
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
Creating a confusion matrix to evaluate the model performance
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

y_train_pred = cross_val_predict(svm_clf, X_train, y_train.ravel(), cv=3)  # sgd_clf in the first parameter
confusion_matrix(y_train.ravel(), y_train_pred)
_____no_output_____
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
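The evaluation pattern above — `cross_val_predict` feeding `confusion_matrix` — can be exercised on synthetic data (the two blobs below are made up for illustration, not the frame data):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# Two well-separated 4-feature blobs standing in for the jump/no-jump frames.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (30, 4)), rng.normal(2, 0.5, (30, 4))])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel='linear')
pred = cross_val_predict(clf, X, y, cv=3)
cm = confusion_matrix(y, pred)
print(cm)  # 2x2 matrix; the diagonal holds the correctly classified samples
```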
Saving the model
import joblib

joblib.dump(svm_clf, 'jump_model.pkl')  # sgd_clf in the first parameter
_____no_output_____
MIT
Pygame-master/Chrome_Dinosaur_Game/MACHINE_LEARNING.ipynb
professorjar/curso-de-jogos-
Reflect Tables into SQLAlchemy ORM
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func

# create engine to hawaii.sqlite
engine = create_engine("sqlite:///hawaii.sqlite")

# reflect an existing database into a new m...
_____no_output_____
ADSL
climate_starter.ipynb
tanmayrp/sqlalchemy-challenge
Exploratory Precipitation Analysis
# Find the most recent date in the data set.
most_recent_date_str = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
print(f"The most recent date in the data set: {most_recent_date_str[0]}")

# Design a query to retrieve the last 12 months of precipitation data and plot the results.
# Starting ...
_____no_output_____
ADSL
climate_starter.ipynb
tanmayrp/sqlalchemy-challenge
Exploratory Station Analysis
# Design a query to calculate the total number of stations in the dataset
print(f"The number of stations in the dataset: {session.query(Station.id).count()} ");

# Design a query to find the most active stations (i.e. what stations have the most rows?)
# List the stations and the counts in descending order.
most_active_sta...
_____no_output_____
ADSL
climate_starter.ipynb
tanmayrp/sqlalchemy-challenge
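The "most active station" query pattern can be sketched with the standard-library `sqlite3` module against a made-up in-memory table (the schema and row counts below are illustrative, not the real hawaii.sqlite contents):

```python
import sqlite3

# Minimal stand-in for the measurement table: the "most active" station
# is simply the one with the most rows.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE measurement (station TEXT, tobs REAL)")
rows = [("USC00519281", 70.0)] * 5 + [("USC00513117", 68.0)] * 3
con.executemany("INSERT INTO measurement VALUES (?, ?)", rows)

most_active = con.execute(
    "SELECT station, COUNT(*) AS n FROM measurement "
    "GROUP BY station ORDER BY n DESC"
).fetchone()
print(most_active)  # ('USC00519281', 5)
```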
Close session
# Close Session session.close()
_____no_output_____
ADSL
climate_starter.ipynb
tanmayrp/sqlalchemy-challenge
Binary Logistic RegressionLet $X$ be the training input of size $n \times p$: it contains $n$ examples, each with $p$ features. Let $y$ be the training target of size $n$. Each input $X_i$, a vector of size $p$, is associated with its target $y_i$, which is $0$ or $1$. Logistic regression tries to fit a linear model to predict the t...
def sigmoid(x):
    return 1 / (1 + np.exp(-x))

y_out = np.random.randn(13).astype(np.float32)
y_true = np.random.randint(0, 2, (13)).astype(np.float32)
y_pred = sigmoid(y_out)
j = - np.sum(y_true * np.log(y_pred) + (1-y_true) * np.log(1-y_pred))

ty_true = torch.tensor(y_true, requires_grad=False)
ty_pred = torch.ten...
[-1.6231388 -2.9766939 2.274354 -6.4779763 -1.4708843 1.2155157 -1.9948862 1.8867183 1.4462028 18.669147 1.5500078 -1.6234685 -1.3342199] [-1.6231389 -2.976694 2.274354 -6.477976 -1.4708843 1.2155157 -1.9948862 1.8867184 1.4462028 18.669147 1.5500077 -1.6234685 -1.3342199] 5.717077e-07
MIT
courses/ml/logistic_regression.ipynb
obs145628/ml-notebooks
$$\frac{\partial J(\beta)}{\partial o_i} = \hat{y_i} - y_i$$$$\frac{\partial J(\beta)}{\partial o} = \hat{y} - y$$
y_out = np.random.randn(13).astype(np.float32)
y_true = np.random.randint(0, 2, (13)).astype(np.float32)
y_pred = sigmoid(y_out)
j = - np.sum(y_true * np.log(y_pred) + (1-y_true) * np.log(1-y_pred))

ty_true = torch.tensor(y_true, requires_grad=False)
ty_out = torch.tensor(y_out, requires_grad=True)
criterion = torch.n...
[-0.7712122 0.5310385 -0.7378207 -0.13447696 0.20648097 0.28622478 -0.7465389 0.5608791 0.53383535 -0.75912154 -0.4418677 0.6848638 0.35961235] [-0.7712122 0.5310385 -0.7378207 -0.13447696 0.20648097 0.28622478 -0.7465389 0.5608791 0.53383535 -0.75912154 -0.4418677 0.6848638 0.35961235] 0....
MIT
courses/ml/logistic_regression.ipynb
obs145628/ml-notebooks
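The closed-form gradient $\hat{y} - y$ can also be cross-checked without autograd, using central finite differences; a minimal numpy sketch:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def bce(y_out, y_true):
    y_pred = sigmoid(y_out)
    return -np.sum(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

rng = np.random.default_rng(0)
y_out = rng.normal(size=13)
y_true = rng.integers(0, 2, 13).astype(float)

analytic = sigmoid(y_out) - y_true  # the closed form above
eps = 1e-6
numeric = np.array([
    (bce(y_out + eps * np.eye(13)[i], y_true)
     - bce(y_out - eps * np.eye(13)[i], y_true)) / (2 * eps)
    for i in range(13)
])
print(np.max(np.abs(analytic - numeric)))  # tiny: the two gradients agree
```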
Can be trained with gradient descent
def log_reg_sk(X, y):
    m = LogisticRegression(fit_intercept=False)
    m.fit(X, y)
    return m.coef_

def get_error(X, y, w):
    y_pred = sigmoid(X @ w)
    err = - np.sum(y * np.log(y_pred) + (1-y) * np.log(1-y_pred))
    return err

def log_reg(X, y):
    w = np.random.randn(X.shape[1])
    for epoch ...
SGD Error = 71.14744133609668 SGD Error = 49.65028785288255 SGD Error = 48.91772028291884 SGD Error = 48.888462052036814 SGD Error = 48.88680421514018 SGD Error = 48.88669058552164 SGD Error = 48.88668168135676 SGD Error = 48.886680916022215 SGD Error = 48.88668084643879 SGD Error = 48.88668083991474 SGD Error = 48.886...
MIT
courses/ml/logistic_regression.ipynb
obs145628/ml-notebooks
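As a self-contained complement to the (truncated) training loop above, here is a minimal batch gradient descent on made-up separable data, using the $X^T(\hat{y} - y)$ gradient:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Toy separable data (illustrative, not the notebook's dataset).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.array([0.0] * 50 + [1.0] * 50)

def error(w):
    p = sigmoid(X @ w)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

w = np.zeros(2)
err0 = error(w)  # 100 * log(2) at the zero initialization
for _ in range(200):
    w -= 0.05 * (X.T @ (sigmoid(X @ w) - y))  # full-batch gradient step
print(error(w) < err0)  # True: the loss decreased
```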
Multiclass Logistic Regression
def softmax(x):
    x_e = np.exp(x)
    return x_e / np.sum(x_e, axis=1, keepdims=True)

y_out = np.random.randn(93, 4).astype(np.float32)
y_true = np.zeros((93, 4)).astype(np.float32)
for i in range(y_true.shape[0]):
    y_true[i][np.random.randint(0, y_true.shape[1])] = 1

y_pred = softmax(y_out)
j = - np.sum(y_true *...
[[ -0. -10.283339 -0. -0. ] [-10.58094 -0. -0. -0. ] [ -0. -0. -2.7528124 -0. ] [-46.90987 -0. -0. -0. ] [ -0. -0. -1.3170731 -0. ] [ -7.9531765 -0. -0. -0. ] [ -0. ...
MIT
courses/ml/logistic_regression.ipynb
obs145628/ml-notebooks
$$\frac{\partial J(\beta)}{\partial o_{ij}} = \hat{y_{ij}} - y_{ij}$$$$\frac{\partial J(\beta)}{\partial o} = \hat{y} - y$$
y_out = np.random.randn(7, 4).astype(np.float32)
y_true = np.zeros((7, 4)).astype(np.float32)
for i in range(y_true.shape[0]):
    y_true[i][np.random.randint(0, y_true.shape[1])] = 1

y_pred = softmax(y_out)
j = - np.sum(y_true * np.log(y_pred))

ty_true = torch.tensor(y_true, requires_grad=False)
ty_true = torch.argm...
[[-0.71088123 0.25399554 0.31700996 0.13987577] [ 0.02140404 0.3097546 0.29681578 -0.6279745 ] [ 0.60384715 0.03253903 0.0066169 -0.6430031 ] [ 0.22169167 -0.88766754 0.03120301 0.63477284] [ 0.05100057 -0.38170385 0.10363309 0.22707026] [ 0.02778155 0.6928965 -0.8194856 0.09880757] [ 0.03780703 ...
MIT
courses/ml/logistic_regression.ipynb
obs145628/ml-notebooks
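The softmax cross-entropy gradient admits the same finite-difference cross-check; a numpy sketch (the max-shift in `softmax` is a standard stability trick, not in the original cell):

```python
import numpy as np

def softmax(x):
    x_e = np.exp(x - x.max(axis=1, keepdims=True))  # shifted for stability
    return x_e / x_e.sum(axis=1, keepdims=True)

def xent(y_out, y_true):
    return -np.sum(y_true * np.log(softmax(y_out)))

rng = np.random.default_rng(0)
y_out = rng.normal(size=(7, 4))
y_true = np.eye(4)[rng.integers(0, 4, 7)]  # one-hot targets

analytic = softmax(y_out) - y_true  # the closed form above
eps = 1e-6
numeric = np.zeros_like(y_out)
for i in range(7):
    for j in range(4):
        d = np.zeros_like(y_out)
        d[i, j] = eps
        numeric[i, j] = (xent(y_out + d, y_true) - xent(y_out - d, y_true)) / (2 * eps)
print(np.max(np.abs(analytic - numeric)))  # tiny: matches the closed form
```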
Can be trained with gradient descent
def get_error_multi(X, y, w):
    y_pred = softmax(X @ w)
    err = - np.sum(y * np.log(y_pred))
    return err

def multilog_reg(X, y):
    w = np.random.randn(X.shape[1], y.shape[1])
    for epoch in range(10000):
        y_pred = softmax(X @ w)
        dy_out = y_pred - y
        dw = X.T @ dy_out
        ...
SGD Error = 264.5967568728954 SGD Error = 124.52928999771657 SGD Error = 120.69338069535253 SGD Error = 120.60511291188504 SGD Error = 120.60208822782775 SGD Error = 120.60195961583351 SGD Error = 120.60195360857097 SGD Error = 120.60195331813674 SGD Error = 120.60195330392729 SGD Error = 120.60195330322918 SGD Error =...
MIT
courses/ml/logistic_regression.ipynb
obs145628/ml-notebooks
Scaffolds of Keck_Pria_FP_data
Target_name = 'Keck_Pria_FP_data'

smiles_list = []
for i in range(k):
    smiles_list.extend(data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist())

scaffold_set = set()
for smiles in smiles_list:
    mol = Chem.MolFromSmiles(smiles)
    core = MurckoScaffold.GetScaffoldForMol(mol)
    scaffold = Chem.Mol...
Original SMILES is c1cc(cc2c1CCCN2CCOC)NS(=O)(=O)c3c(c(c(c(c3C)C)C)C)C The Scaffold is O=S(=O)(Nc1ccc2c(c1)NCCC2)c1ccccc1 Original SMILES is c1cc(ccc1CC)NC(=O)CSc2ncc(c(=O)[nH]2)S(=O)(=O)c3ccc(cc3C)C The Scaffold is O=C(CSc1ncc(S(=O)(=O)c2ccccc2)c(=O)[nH]1)Nc1ccccc1 Original SMILES is c1ccc2c(c1)c(c[nH]2)CCNC(=O...
MIT
pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb
chao1224/pria_lifechem
Below is the scaffold set for each fold. Scaffold for fold 0
i = 0
smiles_list = data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist()

scaffold_set = set()
for smiles in smiles_list:
    mol = Chem.MolFromSmiles(smiles)
    core = MurckoScaffold.GetScaffoldForMol(mol)
    scaffold = Chem.MolToSmiles(core)
    scaffold_set.add(scaffold)
    print 'Original SMILES i...
Original SMILES is c1cc(cc2c1CCCN2CCOC)NS(=O)(=O)c3c(c(c(c(c3C)C)C)C)C The Scaffold is O=S(=O)(Nc1ccc2c(c1)NCCC2)c1ccccc1 Original SMILES is c1cc(ccc1CC)NC(=O)CSc2ncc(c(=O)[nH]2)S(=O)(=O)c3ccc(cc3C)C The Scaffold is O=C(CSc1ncc(S(=O)(=O)c2ccccc2)c(=O)[nH]1)Nc1ccccc1 Original SMILES is c1ccc2c(c1)c(c[nH]2)CCNC(=O...
MIT
pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb
chao1224/pria_lifechem
Scaffold for fold 1
i = 1
smiles_list = data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist()

scaffold_set = set()
for smiles in smiles_list:
    mol = Chem.MolFromSmiles(smiles)
    core = MurckoScaffold.GetScaffoldForMol(mol)
    scaffold = Chem.MolToSmiles(core)
    scaffold_set.add(scaffold)
    print 'Original SMILES i...
Original SMILES is c1cc(cc(c1)Cl)Nc2nc(cs2)CC(=O)Nc3ccc4c(c3)OCCO4 The Scaffold is O=C(Cc1csc(Nc2ccccc2)n1)Nc1ccc2c(c1)OCCO2 Original SMILES is c1cc(cc(c1NC(=O)c2c(nns2)C)[N+](=O)[O-])OCC The Scaffold is O=C(Nc1ccccc1)c1cnns1 Original SMILES is c1ccc2c(c1)ccn2CCNC(=S)NCCc3cc4ccc(cc4[nH]c3=O)C The Scaffold is O=...
MIT
pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb
chao1224/pria_lifechem
Scaffold for fold 2
i = 2
smiles_list = data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist()

scaffold_set = set()
for smiles in smiles_list:
    mol = Chem.MolFromSmiles(smiles)
    core = MurckoScaffold.GetScaffoldForMol(mol)
    scaffold = Chem.MolToSmiles(core)
    scaffold_set.add(scaffold)
    print 'Original SMILES i...
Original SMILES is c1ccc(c(c1)C(=O)Nc2nnc(o2)Cc3cccs3)SCC The Scaffold is O=C(Nc1nnc(Cc2cccs2)o1)c1ccccc1 Original SMILES is c1cc(ccc1n2ccnc2SCC(=O)Nc3ccc(cc3)Br)F The Scaffold is O=C(CSc1nccn1-c1ccccc1)Nc1ccccc1 Original SMILES is c1cc2c(cc1C(=O)NCc3ccc4c(c3)cc(n4C)C)OCO2 The Scaffold is O=C(NCc1ccc2[nH]ccc2c1...
MIT
pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb
chao1224/pria_lifechem
Scaffold for fold 3
i = 3
smiles_list = data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist()

scaffold_set = set()
for smiles in smiles_list:
    mol = Chem.MolFromSmiles(smiles)
    core = MurckoScaffold.GetScaffoldForMol(mol)
    scaffold = Chem.MolToSmiles(core)
    scaffold_set.add(scaffold)
    print 'Original SMILES i...
Original SMILES is c1cc(oc1)C(=O)Nc2ccc(cc2)Nc3ccc(nn3)n4cccn4 The Scaffold is O=C(Nc1ccc(Nc2ccc(-n3cccn3)nn2)cc1)c1ccco1 Original SMILES is c1ccc(c(c1)C(=O)Nc2nc(cs2)c3ccccn3)Br The Scaffold is O=C(Nc1nc(-c2ccccn2)cs1)c1ccccc1 Original SMILES is c1ccc(cc1)C2=NN(C(C2)c3ccc4c(c3)nccn4)C(=O)c5cccs5 The Scaffold is...
MIT
pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb
chao1224/pria_lifechem
Scaffold for fold 4
i = 4
smiles_list = data_pd_list[i][data_pd_list[i][Target_name]==1]['SMILES'].tolist()

scaffold_set = set()
for smiles in smiles_list:
    mol = Chem.MolFromSmiles(smiles)
    core = MurckoScaffold.GetScaffoldForMol(mol)
    scaffold = Chem.MolToSmiles(core)
    scaffold_set.add(scaffold)
    print 'Original SMILES i...
Original SMILES is c1cc(sc1)Cc2nnc(o2)NC(=O)c3ccc(cc3)S(=O)(=O)N(C)CCCC The Scaffold is O=C(Nc1nnc(Cc2cccs2)o1)c1ccccc1 Original SMILES is c1cc2cccnc2c(c1)SCC(=O)NCCc3ccc(cc3)Cl The Scaffold is O=C(CSc1cccc2cccnc12)NCCc1ccccc1 Original SMILES is c1cc(cc(c1)F)NC(=O)Nc2ccc(cc2)Nc3ccc(nn3)n4cccn4 The Scaffold is O...
MIT
pria_lifechem/analysis/scaffold/scaffold_Keck_Pria_FP_data.ipynb
chao1224/pria_lifechem
Assignment 3.2 - Convolutional Neural Networks (CNNs) in PyTorchWe will do this exercise in Google Colab - https://colab.research.google.com/ Google Colab lets you run notebook code in Google's cloud, where you can use a free GPU! The course authors thank Google and hope that the party won't ...
# Install PyTorch and download data
!pip3 install torch torchvision
!wget -c http://ufldl.stanford.edu/housenumbers/train_32x32.mat http://ufldl.stanford.edu/housenumbers/test_32x32.mat

from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import PIL
import torch
import torch.nn as nn...
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
Loading the data
# First, let's load the dataset
data_train = dset.SVHN('./', transform=transforms.Compose([
                       transforms.ToTensor(),
                       transforms.Normalize(mean=[0.43,0.44,0.47],
                                            std=[0.20,0.20,0.20])
                   ...
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
Splitting the data into training and validation sets. Just in case, the details are here - https://pytorch.org/tutorials/beginner/data_loading_tutorial.html
batch_size = 64

data_size = data_train.data.shape[0]
validation_split = .2
split = int(np.floor(validation_split * data_size))
indices = list(range(data_size))
np.random.shuffle(indices)

train_indices, val_indices = indices[split:], indices[:split]

train_sampler = SubsetRandomSampler(train_indices)
val_sampler = Sub...
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
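The sampler setup above boils down to a shuffled 80/20 index split; in plain numpy (the dataset size below is illustrative):

```python
import numpy as np

# Shuffle all indices once, then carve off the first 20% for validation.
data_size = 1000
validation_split = 0.2
split = int(np.floor(validation_split * data_size))

rng = np.random.default_rng(0)
indices = rng.permutation(data_size)
train_indices, val_indices = indices[split:], indices[:split]
print(len(train_indices), len(val_indices))  # 800 200
```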