Submit the pipeline for execution
pipeline = kfp.Client().create_run_from_pipeline_func(pipeline, arguments={})
_____no_output_____
Apache-2.0
samples/core/ai_platform/ai_platform.ipynb
magencio/pipelines
Wait for the pipeline to finish
pipeline.wait_for_run_completion(timeout=1800)
_____no_output_____
Apache-2.0
samples/core/ai_platform/ai_platform.ipynb
magencio/pipelines
Clean models
!gcloud ml-engine versions delete $MODEL_VERSION --model $MODEL_NAME
!gcloud ml-engine models delete $MODEL_NAME
_____no_output_____
Apache-2.0
samples/core/ai_platform/ai_platform.ipynb
magencio/pipelines
!wget -qO- https://get.nextflow.io | bash
!mv nextflow /usr/local/bin/
!pip install nf-core
!nf-core download rnaseq --singularity

from google.colab import files
files.download("nf-core-rnaseq-1.4.2.tar.gz")
_____no_output_____
MIT
DownloadOfflineFiles.ipynb
johan-gson/nf-core_on_Bianca
Table of Contents

- [Install Monk](0)
- [Importing mxnet-gluoncv backend](1)
- [Creating and Managing experiments](2)
- [Training a Cat Vs Dog image classifier](3)
- [Validating the trained classifier](4)
- [Running inference on test images](5)

Install Monk

Using pip (Recommended)
- colab (gpu)
- All backends: `pip inst...
# Using mxnet-gluon backend

# When installed using pip
from monk.gluon_prototype import prototype

# When installed manually (uncomment the following)
#import os
#import sys
#sys.path.append("monk_v1/");
#sys.path.append("monk_v1/monk/");
#from monk.gluon_prototype import prototype
_____no_output_____
Apache-2.0
study_roadmaps/1_getting_started_roadmap/1_getting_started_with_monk/1) Dog Vs Cat Classifier Using Mxnet-Gluon Backend.ipynb
take2rohit/monk_v1
Creating and managing experiments
- Provide a project name
- Provide an experiment name
- Create a single project for a specific dataset
- Multiple experiments can be created inside each project
- Every experiment can have different hyper-parameters attached to it
gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1");
Mxnet Version: 1.5.1

Experiment Details
    Project: sample-project-1
    Experiment: sample-experiment-1
    Dir: /home/ubuntu/Desktop/monk_pip_test/monk_v1/study_roadmaps/1_getting_started_roadmap/1_getting_started_with_monk/workspace/sample-project-1/sample-experiment-1/
Apache-2.0
study_roadmaps/1_getting_started_roadmap/1_getting_started_with_monk/1) Dog Vs Cat Classifier Using Mxnet-Gluon Backend.ipynb
take2rohit/monk_v1
This creates files and directories as per the following structure

workspace
|
|--------sample-project-1 (Project name can be different)
    |
    |
    |-----sample-experiment-1 (Experiment name can be different)
        ...
# Download dataset
import os
if not os.path.isfile("datazets.zip"):
    os.system("! wget --load-cookies /tmp/cookies.txt \"https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1rG-U1mS8hDU7...
Training Start

Epoch 1/5
----------
Apache-2.0
study_roadmaps/1_getting_started_roadmap/1_getting_started_with_monk/1) Dog Vs Cat Classifier Using Mxnet-Gluon Backend.ipynb
take2rohit/monk_v1
Validating the trained classifier

Load the experiment in validation mode
- Set flag eval_infer as True
gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1", eval_infer=True);
Mxnet Version: 1.5.1

Model Details
    Loading model - workspace/sample-project-1/sample-experiment-1/output/models/final-symbol.json
    Model loaded!

Experiment Details
    Project: sample-project-1
    Experiment: sample-experiment-1
    Dir: /home/ubuntu/Desktop/monk_pip_test/monk_v1/study_roadmaps/1_getting_star...
Apache-2.0
study_roadmaps/1_getting_started_roadmap/1_getting_started_with_monk/1) Dog Vs Cat Classifier Using Mxnet-Gluon Backend.ipynb
take2rohit/monk_v1
Load the validation dataset
gtf.Dataset_Params(dataset_path="datasets/dataset_cats_dogs_eval"); gtf.Dataset();
Dataset Details
    Test path:     datasets/dataset_cats_dogs_eval
    CSV test path: None

Dataset Params
    Input Size: 224
    Processors: 8
    Pre-Composed Test Transforms [{'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]

Dataset Numbers
    Num test images: 50
    Num classes: ...
Apache-2.0
study_roadmaps/1_getting_started_roadmap/1_getting_started_with_monk/1) Dog Vs Cat Classifier Using Mxnet-Gluon Backend.ipynb
take2rohit/monk_v1
Run validation
accuracy, class_based_accuracy = gtf.Evaluate();
Testing
Apache-2.0
study_roadmaps/1_getting_started_roadmap/1_getting_started_with_monk/1) Dog Vs Cat Classifier Using Mxnet-Gluon Backend.ipynb
take2rohit/monk_v1
Running inference on test images

Load the experiment in inference mode
- Set flag eval_infer as True
gtf = prototype(verbose=1); gtf.Prototype("sample-project-1", "sample-experiment-1", eval_infer=True);
Mxnet Version: 1.5.1

Model Details
    Loading model - workspace/sample-project-1/sample-experiment-1/output/models/final-symbol.json
    Model loaded!

Experiment Details
    Project: sample-project-1
    Experiment: sample-experiment-1
    Dir: /home/ubuntu/Desktop/monk_pip_test/monk_v1/study_roadmaps/1_getting_star...
Apache-2.0
study_roadmaps/1_getting_started_roadmap/1_getting_started_with_monk/1) Dog Vs Cat Classifier Using Mxnet-Gluon Backend.ipynb
take2rohit/monk_v1
Select image and Run inference
img_name = "datasets/dataset_cats_dogs_test/0.jpg";
predictions = gtf.Infer(img_name=img_name);

# Display
from IPython.display import Image
Image(filename=img_name)

img_name = "datasets/dataset_cats_dogs_test/90.jpg";
predictions = gtf.Infer(img_name=img_name);

# Display
from IPython.display import Image
Image(filen...
Prediction
    Image name:      datasets/dataset_cats_dogs_test/90.jpg
    Predicted class: dog
    Predicted score: 0.720231831073761
Apache-2.0
study_roadmaps/1_getting_started_roadmap/1_getting_started_with_monk/1) Dog Vs Cat Classifier Using Mxnet-Gluon Backend.ipynb
take2rohit/monk_v1
Stack

A stack (sometimes called a **“push-down stack”**) is an ordered collection of items where the addition of new items and the removal of existing items always takes place at the same end. This end is commonly referred to as the **"top"**. The end opposite the top is known as the **“base”**. The base of the stack is...
class Stack:
    def __init__(self):
        self.items = []

    def push(self, item):
        self.items.append(item)

    def pop(self):
        return self.items.pop()

    def peek(self):
        return self.items[-1]

    def is_empty(self):
        return self.items == []

    def size(self):
        return len(self.items)
True
dog
3
False
8.4
True
2
MIT
data-structures/stack.ipynb
RatanShreshtha/Crash-Course-Computer-Science
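The output above (`True`, `dog`, `3`, `False`, `8.4`, `True`, `2`) is consistent with a driver like the following. This is a reconstruction, not the notebook's original cell, with the class repeated so the snippet runs standalone:

```python
class Stack:
    """List-based stack; repeated here so the cell runs standalone."""
    def __init__(self):
        self.items = []
    def push(self, item):
        self.items.append(item)
    def pop(self):
        return self.items.pop()
    def peek(self):
        return self.items[-1]
    def is_empty(self):
        return self.items == []
    def size(self):
        return len(self.items)

s = Stack()
print(s.is_empty())  # True
s.push(4)
s.push('dog')
print(s.peek())      # dog
s.push(True)
print(s.size())      # 3
print(s.is_empty())  # False
s.push(8.4)
print(s.pop())       # 8.4
print(s.pop())       # True
print(s.size())      # 2
```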
Practice Problems Write a function `reverse_string(mystr)` that reverses the characters in a string.
def reverse_string(mystr):
    s = Stack()
    for ch in mystr:
        s.push(ch)
    reversed_str = ""
    while not s.is_empty():
        reversed_str += s.pop()
    return reversed_str

print(reverse_string('0123456789'))
print(reverse_string('apple'))
print(reverse_string('Take me down to the ...
9876543210
elppa
... ytic esidarap eht ot nwod em ekaT
MIT
data-structures/stack.ipynb
RatanShreshtha/Crash-Course-Computer-Science
Write a function `parentheses_checker(expression)` that tells whether the expression has balanced parentheses.
def parentheses_checker(expression):
    opening_parentheses = ["(", "[", "{"]
    closing_parentheses = [")", "]", "}"]
    s = Stack()
    balanced = True
    for ch in expression:
        if ch in opening_parentheses:
            s.push(ch)
        else:
            if s.is_empty():
                balanced = False
                break
            # A closing symbol must match the most recently opened one.
            if opening_parentheses.index(s.pop()) != closing_parentheses.index(ch):
                balanced = False
                break
    return balanced and s.is_empty()
True
False
True
False
MIT
data-structures/stack.ipynb
RatanShreshtha/Crash-Course-Computer-Science
Write a function `base_convert(number, base)` that converts a decimal integer to an integer in another base system (up to base 16).
def base_convert(number, base):
    digits = "0123456789ABCDEF"
    remainders = Stack()
    while number:
        remainders.push(number % base)
        number = number // base
    converted_number = ""
    while not remainders.is_empty():
        digit = digits[remainders.pop()]
        converted_number += digit
    return converted_number
101010 222 52 2A
11111001011 133023 3713 7CB
MIT
data-structures/stack.ipynb
RatanShreshtha/Crash-Course-Computer-Science
Write a function `infix_to_postfix(expression)` that converts an expression from infix notation to postfix notation.

1. Create an empty stack called opstack for keeping operators. Create an empty list for output.
2. Convert the input infix string to a list by using the string method split.
3. Scan the token list from left...
def infix_to_postfix(expression):
    precedence = {"^": 4, "*": 3, "/": 3, "+": 2, "-": 2, "(": 1}
    postfix_list = []
    op_stack = Stack()
    token_list = expression.split()
    for token in token_list:
        if token.isdigit() or token.isalpha():
            postfix_list.append(token)
        elif token == "(":
            op_stack.push(token)
        elif token == ")":
            # Pop operators back to the matching opening parenthesis.
            top_token = op_stack.pop()
            while top_token != "(":
                postfix_list.append(top_token)
                top_token = op_stack.pop()
        else:
            # Pop operators of greater or equal precedence first.
            while (not op_stack.is_empty()) and precedence[op_stack.peek()] >= precedence[token]:
                postfix_list.append(op_stack.pop())
            op_stack.push(token)
    while not op_stack.is_empty():
        postfix_list.append(op_stack.pop())
    return " ".join(postfix_list)
A B * C D * +
A B + C * D E - F G + * -
1 2 + 3 4 * 5 / -
10 3 5 * 16 4 - / +
5 3 4 2 - ^ *
MIT
data-structures/stack.ipynb
RatanShreshtha/Crash-Course-Computer-Science
Write a function `evaluate_postfix(expression)` that evaluates an expression in postfix notation. Assume the postfix expression is a string of tokens delimited by spaces. The operators are *, /, +, and - and the operands are assumed to be single-digit integer values. The output will be an integer result.

1. Create an emp...
from operator import add, sub, mul, truediv, mod, pow

def evaluate_postfix(expression):
    operators = {
        "+": add,
        "-": sub,
        "*": mul,
        "/": truediv,
        "^": pow,
    }
    operands = Stack()
    token_list = expression.split()
    for token in token_list:
        if token.isdigit():
            operands.push(int(token))
        else:
            # The second operand popped is the left-hand side.
            right = operands.pop()
            left = operands.pop()
            operands.push(operators[token](left, right))
    return operands.pop()
3.0
5.0
MIT
data-structures/stack.ipynb
RatanShreshtha/Crash-Course-Computer-Science
Exercise 1 Task 7 Keras and deep dreaming

The deep dreaming script was executed within an anaconda environment with the following packages:
- python 3.5
- keras 2.0.2
- tensorflow 1.0
- pillow 4.0.0

Visualizing image results through parameter changes in deep_dream.py script

Original to be transformed image because my ow...
# First script run with original parameters %run deep_dream.py img/std_o.jpg img/std_1.jpg
Using TensorFlow backend.
MIT
notebooks/henrik_ueb01/07_Keras.ipynb
hhain/sdap17
After execution with the original parameters we get this result :-)

![title](img/std_1.jpg.png)

It seems to recognize cats and fishes? -> Let's play with the parameters and generate some more images.

2. Run with double step size (0.02)
%run deep_dream.py img/std_o.jpg img/std_2.png
Using TensorFlow backend.
MIT
notebooks/henrik_ueb01/07_Keras.ipynb
hhain/sdap17
![title](img/std_2.png)

The structures seem to be more coarse than in the original image.

3. Run with the following parameters
- step = 0.02 Gradient ascent step size
- num_octave = 4 (changed from 3 to 4) Number of scales at which to run gradient ascent
- octave_scale = 1.4 Size ratio between scales
- iterations = 20 ...
%run deep_dream.py img/std_o.jpg img/std_3
Model loaded.
MIT
notebooks/henrik_ueb01/07_Keras.ipynb
hhain/sdap17
![title](img/std_3.png)

Changing the number of octaves does not lead to a changed visualization.

4. Run with the following parameters
- step = 0.02 Gradient ascent step size
- num_octave = 4 Number of scales at which to run gradient ascent
- octave_scale = 2.0 (1.4 -> 2.0) Size ratio between scales
- iterations = 30 (20 -> 30) Number ...
%run deep_dream.py img/std_o.jpg img/std_4
Model loaded.
MIT
notebooks/henrik_ueb01/07_Keras.ipynb
hhain/sdap17
![title](img/std_4.png)

5. Run with the following parameters
- step = 0.02 Gradient ascent step size
- num_octave = 4 Number of scales at which to run gradient ascent
- octave_scale = 1.0 Size ratio between scales
- iterations = 30 Number of ascent steps per scale
- max_loss = 10.
%run deep_dream.py img/std_o.jpg img/std_5
Model loaded.
MIT
notebooks/henrik_ueb01/07_Keras.ipynb
hhain/sdap17
![title](img/std_5.png)

Octave scale seems to have the most effect on image structure granularity and, together with step, the most effect on run time behaviour.

6. Run with the following parameters
- step = 0.01 Gradient ascent step size
- num_octave = 4 Number of scales at which to run gradient ascent
- octave_scale = ...
%run deep_dream.py img/std_o.jpg img/std_6
Model loaded.
MIT
notebooks/henrik_ueb01/07_Keras.ipynb
hhain/sdap17
![title](img/std_6.png)

Reducing step and octave_scale leads to a fine granular image representation but to very long run times.

7. Run with the following parameters
- step = 0.015 Gradient ascent step size
- num_octave = 3 Number of scales at which to run gradient ascent
- octave_scale = 1.0 Size ratio between scales
-...
%run deep_dream.py img/std_o.jpg img/std_7
Model loaded.
MIT
notebooks/henrik_ueb01/07_Keras.ipynb
hhain/sdap17
Dependencies
from utillity_script_cloud_segmentation import *

seed = 0
seed_everything(seed)
warnings.filterwarnings("ignore")

base_path = '/content/drive/My Drive/Colab Notebooks/[Kaggle] Understanding Clouds from Satellite Images/'
data_path = base_path + 'Data/'
model_base_path = base_path + 'Models/files/classification/'
model_...
_____no_output_____
MIT
Model backlog/Evaluation/classification/Google Colab/25-EfficientNetB0_224x224_Cyclical_triangular.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
Load data
hold_out_set = pd.read_csv(hold_out_set_path)
X_val = hold_out_set[hold_out_set['set'] == 'validation']
print('Validation samples: ', len(X_val))

# Preprocess data
label_columns = ['Fish', 'Flower', 'Gravel', 'Sugar']
for label in label_columns:
    X_val[label].replace({0: 1, 1: 0}, inplace=True)

display(X_val.head())
Validation samples: 1105
MIT
Model backlog/Evaluation/classification/Google Colab/25-EfficientNetB0_224x224_Cyclical_triangular.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
Model parameters
HEIGHT = 224
WIDTH = 224
BETA1 = 0.25
BETA2 = 0.5
BETA3 = 1
TTA_STEPS = 8
_____no_output_____
MIT
Model backlog/Evaluation/classification/Google Colab/25-EfficientNetB0_224x224_Cyclical_triangular.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
Model
model = load_model(model_path)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.

WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4479: The name tf.truncated_norm...
MIT
Model backlog/Evaluation/classification/Google Colab/25-EfficientNetB0_224x224_Cyclical_triangular.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
Classification data generator
datagen = ImageDataGenerator(rescale=1./255.,
                             vertical_flip=True,
                             horizontal_flip=True,
                             zoom_range=[1, 1.1],
                             shear_range=45.0,
                             rotation_range=360,
                             width_shift_r...
Found 1105 validated image filenames.
MIT
Model backlog/Evaluation/classification/Google Colab/25-EfficientNetB0_224x224_Cyclical_triangular.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
Classification threshold and mask size tuning
valid_preds = apply_tta(model, valid_generator, steps=TTA_STEPS)

print('BETA1')
best_tresholds1 = classification_tunning(valid_generator.labels, valid_preds, label_columns, beta=BETA1)
print('BETA2')
best_tresholds2 = classification_tunning(valid_generator.labels, valid_preds, label_columns, beta=BETA2)
print('BETA3')
best_tresholds3 = classification_tunning(valid_generator.labels, valid_preds, label_columns, beta=BETA3)
BETA1
Fish treshold=0.73 Score=0.763
Flower treshold=0.66 Score=0.925
Gravel treshold=0.76 Score=0.707
Sugar treshold=0.58 Score=0.685
BETA2
Fish treshold=0.62 Score=0.684
Flower treshold=0.54 Score=0.876
Gravel treshold=0.56 Score=0.684
Sugar treshold=0.53 Score=0.619
BETA3
Fish treshold=0.40 Score=0.720
Flower tresho...
MIT
Model backlog/Evaluation/classification/Google Colab/25-EfficientNetB0_224x224_Cyclical_triangular.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
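The tuning step sweeps candidate thresholds per label and keeps the one that maximizes the F-beta score for the given beta. A minimal NumPy sketch of that idea (the function names here are illustrative, not the notebook's own `classification_tunning` helper):

```python
import numpy as np

def fbeta(y_true, y_pred, beta):
    # F-beta = (1 + b^2) * P * R / (b^2 * P + R); beta < 1 weights precision higher.
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def tune_threshold(y_true, probs, beta, grid=np.arange(0.05, 1.0, 0.01)):
    # Pick the probability threshold that maximizes F-beta on the validation labels.
    scores = [fbeta(y_true, (probs >= t).astype(int), beta) for t in grid]
    best = int(np.argmax(scores))
    return grid[best], scores[best]

# Toy example with synthetic scores.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
p = np.clip(y * 0.6 + rng.normal(0.2, 0.2, 200), 0, 1)
t, s = tune_threshold(y, p, beta=0.25)
print(f"threshold={t:.2f} score={s:.3f}")
```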
Metrics (beta 0.25)
get_metrics_classification(X_val, valid_preds, label_columns, best_tresholds1)
Metrics for: Fish
              precision    recall  f1-score   support
           0       0.57      0.96      0.71       555
           1       0.87      0.26      0.40       550
    accuracy                           0.61      1105
   macro avg       0.72      0.61      0.55      1105
weighted avg       0.72      0...
MIT
Model backlog/Evaluation/classification/Google Colab/25-EfficientNetB0_224x224_Cyclical_triangular.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
Metrics (beta 0.5)
get_metrics_classification(X_val, valid_preds, label_columns, best_tresholds2)
Metrics for: Fish
              precision    recall  f1-score   support
           0       0.63      0.84      0.72       555
           1       0.75      0.50      0.60       550
    accuracy                           0.67      1105
   macro avg       0.69      0.67      0.66      1105
weighted avg       0.69      0...
MIT
Model backlog/Evaluation/classification/Google Colab/25-EfficientNetB0_224x224_Cyclical_triangular.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
Metrics (beta 1)
get_metrics_classification(X_val, valid_preds, label_columns, best_tresholds3)
Metrics for: Fish
              precision    recall  f1-score   support
           0       0.79      0.44      0.56       555
           1       0.61      0.88      0.72       550
    accuracy                           0.66      1105
   macro avg       0.70      0.66      0.64      1105
weighted avg       0.70      0...
MIT
Model backlog/Evaluation/classification/Google Colab/25-EfficientNetB0_224x224_Cyclical_triangular.ipynb
kurkutesa/Machine_Learning_Clouds_and_Satellite_Images
Example: CanvasXpress violin Chart No. 16

This example page demonstrates how to, using the Python package, create a chart that matches the CanvasXpress online example located at:

https://www.canvasxpress.org/examples/violin-16.html

This example is generated using the reproducible JSON obtained from the above page and the...
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook

cx = CanvasXpress(
    render_to="violin16",
    data={
        "y": {
            "smps": [
                "Var1",
                "Var2",
                "Var3",
                ...
_____no_output_____
MIT
tutorials/notebook/cx_site_chart_examples/violin_16.ipynb
docinfosci/canvasxpress-python
TensorFlow Tutorial 01 Simple Linear Model

by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/) / [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ)

Introduction

This tutorial demonstrates the basic workflow ...
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
/home/magnus/anaconda3/envs/tf-gpu/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_con...
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
This was developed using Python 3.6 (Anaconda) and TensorFlow version:
tf.__version__
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Load Data

The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
from mnist import MNIST
data = MNIST(data_dir="data/MNIST/")
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
The MNIST data-set has now been loaded and consists of 70,000 images and class-numbers for the images. The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
print("Size of:")
print("- Training-set:\t\t{}".format(data.num_train))
print("- Validation-set:\t{}".format(data.num_val))
print("- Test-set:\t\t{}".format(data.num_test))
Size of:
- Training-set:    55000
- Validation-set:  5000
- Test-set:        10000
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Copy some of the data-dimensions for convenience.
# The images are stored in one-dimensional arrays of this length.
img_size_flat = data.img_size_flat

# Tuple with height and width of images used to reshape arrays.
img_shape = data.img_shape

# Number of classes, one class for each of 10 digits.
num_classes = data.num_classes
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
One-Hot Encoding The output-data is loaded as both integer class-numbers and so-called One-Hot encoded arrays. This means the class-numbers have been converted from a single integer to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i$'th element which i...
data.y_test[0:5, :]
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
We also need the classes as integers for various comparisons and performance measures. These can be found from the One-Hot encoded arrays by taking the index of the highest element using the `np.argmax()` function. But this has already been done for us when the data-set was loaded, so we can see the class-number for th...
data.y_test_cls[0:5]
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
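The relationship between the One-Hot arrays and the integer class-numbers can be made concrete with a small NumPy sketch (the three example classes 7, 2 and 1 match the first MNIST test labels shown above):

```python
import numpy as np

# One-Hot encoded labels for three images (classes 7, 2, 1).
y_one_hot = np.array([[0, 0, 0, 0, 0, 0, 0, 1, 0, 0],
                      [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
                      [0, 1, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=np.float64)

# Recover the integer class-numbers by taking the index of the largest element.
y_cls = np.argmax(y_one_hot, axis=1)
print(y_cls)  # [7 2 1]
```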
Helper-function for plotting images

Function used to plot 9 images in a 3x3 grid, writing the true and predicted classes below each image.
def plot_images(images, cls_true, cls_pred=None):
    assert len(images) == len(cls_true) == 9

    # Create figure with 3x3 sub-plots.
    fig, axes = plt.subplots(3, 3)
    fig.subplots_adjust(hspace=0.3, wspace=0.3)

    for i, ax in enumerate(axes.flat):
        # Plot image.
        ax.imshow(images[i].reshape...
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Plot a few images to see if data is correct
# Get the first images from the test-set.
images = data.x_test[0:9]

# Get the true classes for those images.
cls_true = data.y_test_cls[0:9]

# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
TensorFlow Graph

The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be ex...
x = tf.placeholder(tf.float32, [None, img_size_flat])
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in th...
y_true = tf.placeholder(tf.float32, [None, num_classes])
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Finally we have the placeholder variable for the true class of each image in the placeholder variable `x`. These are integers and the dimensionality of this placeholder variable is set to `[None]` which means the placeholder variable is a one-dimensional vector of arbitrary length.
y_true_cls = tf.placeholder(tf.int64, [None])
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Variables to be optimized

Apart from the placeholder variables that were defined above and which serve as feeding input data into the model, there are also some model variables that must be changed by TensorFlow so as to make the model perform better on the training data.

The first variable that must be optimized is ca...
weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
The second variable that must be optimized is called `biases` and is defined as a 1-dimensional tensor (or vector) of length `num_classes`.
biases = tf.Variable(tf.zeros([num_classes]))
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Model

This simple mathematical model multiplies the images in the placeholder variable `x` with the `weights` and then adds the `biases`.

The result is a matrix of shape `[num_images, num_classes]` because `x` has shape `[num_images, img_size_flat]` and `weights` has shape `[img_size_flat, num_classes]`, so the multipl...
logits = tf.matmul(x, weights) + biases
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Now `logits` is a matrix with `num_images` rows and `num_classes` columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or l...
y_pred = tf.nn.softmax(logits)
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
The predicted class can be calculated from the `y_pred` matrix by taking the index of the largest element in each row.
y_pred_cls = tf.argmax(y_pred, axis=1)
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Cost-function to be optimized

To make the model better at classifying the input images, we must somehow change the variables for `weights` and `biases`. To do this we first need to know how well the model currently performs by comparing the predicted output of the model `y_pred` to the desired output `y_true`.

The cros...
cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y_true)
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
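What the TensorFlow op computes can be sketched in NumPy: softmax the logits (in log-space for numerical stability), then take the negative log-probability of the true class. This is an illustrative re-implementation, not the TF kernel itself:

```python
import numpy as np

def softmax_cross_entropy(logits, labels_one_hot):
    # Numerically stable log-softmax per row.
    z = logits - logits.max(axis=1, keepdims=True)
    log_softmax = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Cross-entropy: negative log-probability assigned to the true class.
    return -(labels_one_hot * log_softmax).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])
print(softmax_cross_entropy(logits, labels))
```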
We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cros...
cost = tf.reduce_mean(cross_entropy)
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Optimization method

Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the basic form of Gradient Descent where the step-size is set to 0.5.

Note that optimization is not performed at this point. In fact, nothing is calculated at all; we just add the optimizer-ob...
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(cost)
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Performance measures

We need a few more performance measures to display the progress to the user.

This is a vector of booleans indicating whether the predicted class equals the true class of each image.
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
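The same boolean-to-float trick can be checked with plain NumPy (toy class vectors, not the tutorial's data):

```python
import numpy as np

y_pred_cls = np.array([3, 1, 4, 1, 5])
y_true_cls = np.array([3, 1, 4, 4, 5])

correct = (y_pred_cls == y_true_cls)            # vector of booleans
accuracy = np.mean(correct.astype(np.float32))  # False -> 0.0, True -> 1.0
print(accuracy)  # 4 correct out of 5 -> 0.8
```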
TensorFlow Run

Create TensorFlow session

Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
session = tf.Session()
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Initialize variables

The variables for `weights` and `biases` must be initialized before we start optimizing them.
session.run(tf.global_variables_initializer())
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Helper-function to perform optimization iterations

There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore use Stochastic Gradient Descent which only uses a small batch of images in each iteration of the optimizer.
batch_size = 100
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Function for performing a number of optimization iterations so as to gradually improve the `weights` and `biases` of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
def optimize(num_iterations):
    for i in range(num_iterations):
        # Get a batch of training examples.
        # x_batch now holds a batch of images and
        # y_true_batch are the true labels for those images.
        x_batch, y_true_batch, _ = data.random_batch(batch_size=batch_size)

        # Put ...
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Helper-functions to show performance

Dict with the test-set data to be used as input to the TensorFlow graph. Note that we must use the correct names for the placeholder variables in the TensorFlow graph.
feed_dict_test = {x: data.x_test, y_true: data.y_test, y_true_cls: data.y_test_cls}
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Function for printing the classification accuracy on the test-set.
def print_accuracy():
    # Use TensorFlow to compute the accuracy.
    acc = session.run(accuracy, feed_dict=feed_dict_test)

    # Print the accuracy.
    print("Accuracy on test-set: {0:.1%}".format(acc))
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Function for printing and plotting the confusion matrix using scikit-learn.
def print_confusion_matrix():
    # Get the true classifications for the test-set.
    cls_true = data.y_test_cls

    # Get the predicted classifications for the test-set.
    cls_pred = session.run(y_pred_cls, feed_dict=feed_dict_test)

    # Get the confusion matrix using sklearn.
    cm = confusion_matrix(y_tru...
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
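To make the structure of the matrix concrete, here is a small sketch of what `confusion_matrix` computes: entry `cm[i, j]` counts images whose true class is `i` and predicted class is `j`. The helper name is illustrative; the notebook itself uses the sklearn call above.

```python
import numpy as np

def confusion_matrix_np(cls_true, cls_pred, num_classes):
    # cm[i, j] counts samples with true class i predicted as class j.
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(cls_true, cls_pred):
        cm[t, p] += 1
    return cm

cls_true = [0, 1, 2, 2, 0]
cls_pred = [0, 2, 2, 2, 0]
print(confusion_matrix_np(cls_true, cls_pred, num_classes=3))
```

Correct classifications accumulate on the diagonal; every off-diagonal count is a mis-classification.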
Function for plotting examples of images from the test-set that have been mis-classified.
def plot_example_errors():
    # Use TensorFlow to get a list of boolean values
    # whether each test-image has been correctly classified,
    # and a list for the predicted class of each image.
    correct, cls_pred = session.run([correct_prediction, y_pred_cls],
                                    feed_dict=feed_di...
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Helper-function to plot the model weights

Function for plotting the `weights` of the model. 10 images are plotted, one for each digit that the model is trained to recognize.
def plot_weights():
    # Get the values for the weights from the TensorFlow variable.
    w = session.run(weights)

    # Get the lowest and highest values for the weights.
    # This is used to correct the colour intensity across
    # the images so they can be compared with each other.
    w_min = np.min(w)
    ...
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Performance before any optimization

The accuracy on the test-set is 9.8%. This is because the model has only been initialized and not optimized at all, so it always predicts that the image shows a zero digit, as demonstrated in the plot below, and it turns out that 9.8% of the images in the test-set happen to be zero digits.
print_accuracy()
plot_example_errors()
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Performance after 1 optimization iteration

Already after a single optimization iteration, the model has increased its accuracy on the test-set significantly.
optimize(num_iterations=1)
print_accuracy()
plot_example_errors()
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
The weights can also be plotted as shown below. Positive weights are red and negative weights are blue. These weights can be intuitively understood as image-filters.

For example, the weights used to determine if an image shows a zero-digit have a positive reaction (red) to an image of a circle, and have a negative reac...
plot_weights()
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Performance after 10 optimization iterations
# We have already performed 1 iteration.
optimize(num_iterations=9)

print_accuracy()
plot_example_errors()
plot_weights()
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Performance after 1000 optimization iterations

After 1000 optimization iterations, the model only mis-classifies about one in ten images. As demonstrated below, some of the mis-classifications are justified because the images are very hard to determine with certainty even for humans, while others are quite obvious and ...
# We have already performed 10 iterations. optimize(num_iterations=990) print_accuracy() plot_example_errors()
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
The model has now been trained for 1000 optimization iterations, with each iteration using 100 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether ...
plot_weights()
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly as 6 or 8.
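Under the hood, a confusion matrix is just a count table indexed by (true class, predicted class). A minimal NumPy sketch, independent of the notebook's `print_confusion_matrix` helper:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, num_classes):
    # cm[i, j] counts samples whose true class is i and predicted class is j.
    cm = np.zeros((num_classes, num_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

cm = confusion_matrix([0, 1, 1, 2], [0, 1, 2, 2], num_classes=3)
print(cm)
# Diagonal entries are correct classifications; off-diagonal are errors.
```

Row sums give the number of test images per true class, so each row of the printed matrix in the output below sums to the class frequency in the test-set.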
print_confusion_matrix()
[[ 956 0 3 1 1 4 11 3 1 0] [ 0 1114 2 2 1 2 4 2 8 0] [ 6 8 925 23 11 3 13 12 26 5] [ 3 1 19 928 0 34 2 10 5 8] [ 1 3 4 2 918 2 11 2 6 33] [ 8 3 7 36 8 781 15 6 20 8] [...
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
We are now done using TensorFlow, so we close the session to release its resources.
# This has been commented out in case you want to modify and experiment # with the Notebook without having to restart it. # session.close()
_____no_output_____
MIT
01_Simple_Linear_Model.ipynb
tri-water/TensorFlow-Tutorials
Deferred Initialization
:label:`sec_deferred_init`

So far, it might seem that we got away with being sloppy in setting up our networks. Specifically, we did the following unintuitive things, which might not seem like they should work:

* We defined the network architectures without specifying the input dimensionality.
* We a...
import tensorflow as tf net = tf.keras.models.Sequential([ tf.keras.layers.Dense(256, activation=tf.nn.relu), tf.keras.layers.Dense(10), ])
_____no_output_____
MIT
d2l-en/tensorflow/chapter_deep-learning-computation/deferred-init.ipynb
gr8khan/d2lai
At this point, the network cannot possibly know the dimensions of the input layer's weights because the input dimension remains unknown. Consequently the framework has not yet initialized any parameters. We confirm by attempting to access the parameters below.
[net.layers[i].get_weights() for i in range(len(net.layers))]
_____no_output_____
MIT
d2l-en/tensorflow/chapter_deep-learning-computation/deferred-init.ipynb
gr8khan/d2lai
Note that each layer object exists but the weights are empty. Using `net.get_weights()` would throw an error since the weights have not been initialized yet. Next let us pass data through the network to make the framework finally initialize parameters.
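The mechanism can be mimicked outside any framework: a layer that defers allocating its weight matrix until it first sees data, at which point the input dimension becomes known. A plain NumPy analogy (a sketch of the idea, not Keras's actual implementation):

```python
import numpy as np

class LazyDense:
    """Allocates its weight matrix lazily, on the first forward pass."""
    def __init__(self, units):
        self.units = units
        self.w = None  # unknown until the input dimension is seen

    def __call__(self, x):
        if self.w is None:  # deferred initialization happens here
            in_dim = x.shape[-1]
            self.w = np.random.default_rng(0).normal(size=(in_dim, self.units))
        return x @ self.w

layer = LazyDense(256)
print(layer.w)                    # None: nothing allocated yet
out = layer(np.ones((2, 20)))
print(layer.w.shape, out.shape)   # (20, 256) (2, 256)
```

As in Keras, the first call fixes the weight shape; subsequent calls with a different input dimension would fail, which is exactly why the framework waits for real data before initializing.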
X = tf.random.uniform((2, 20)) net(X) [w.shape for w in net.get_weights()]
_____no_output_____
MIT
d2l-en/tensorflow/chapter_deep-learning-computation/deferred-init.ipynb
gr8khan/d2lai
Control Client
# export import httpx from pydantic import BaseModel from urllib.parse import urljoin from will_it_saturate.files import BenchmarkFile from will_it_saturate.hosts import Host, HostDetails from will_it_saturate.registry import ModelParameters class ControlClient(BaseModel): host: Host timeout: int = 60 ...
_____no_output_____
Apache-2.0
31_control_client.ipynb
ephes/will_it_saturate
Usage
# dont_test host = Host(name="localhost", port=8001) client = ControlClient(host=host) print(client.host.port) print(client.get_host_details().machine_id)
8001 C02DR0MZQ6LT
Apache-2.0
31_control_client.ipynb
ephes/will_it_saturate
Tests
test_host = Host(name="foobar", port=8001) test_client = ControlClient(host=test_host) assert "foobar" in test_client.base_url assert "8001" in test_client.base_url # hide # dont_test from nbdev.export import notebook2script notebook2script()
Converted 00_index.ipynb. Converted 01_host.ipynb. Converted 02_file.ipynb. Converted 03_registry.ipynb. Converted 04_epochs.ipynb. Converted 10_servers.ipynb. Converted 11_views_for_fastapi_server.ipynb. Converted 12_views_for_django_server.ipynb. Converted 15_servers_started_locally.ipynb. Converted 16_servers_starte...
Apache-2.0
31_control_client.ipynb
ephes/will_it_saturate
CNN without regularization
clear_session() def create_cnn_model(): cnn_team_input = layers.Input(shape=(n_players_per_team_match,), dtype='int32', name='team') cnn_oppo_team_input = layers.Input(shape=(n_players_per_team_match,), dtype='int32', name='oppo_team') cnn_player_input = layers.Input(shape=(n_players_per_team_match,), dtyp...
/usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:590: DataConversionWarning: Data with input dtype object was converted to float64 by StandardScaler. warnings.warn(msg, DataConversionWarning) /usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:590: DataConversionWarning: Data with inp...
MIT
notebooks/2019_season/5.4-player-data-cnn.ipynb
tipresias/augury
CNN with dropout
def create_do_model(): do_team_input = layers.Input(shape=(n_players_per_team_match,), dtype='int32', name='team') do_oppo_team_input = layers.Input(shape=(n_players_per_team_match,), dtype='int32', name='oppo_team') do_player_input = layers.Input(shape=(n_players_per_team_match,), dtype='int32', name='play...
/usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:590: DataConversionWarning: Data with input dtype object was converted to float64 by StandardScaler. warnings.warn(msg, DataConversionWarning) /usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:590: DataConversionWarning: Data with inp...
MIT
notebooks/2019_season/5.4-player-data-cnn.ipynb
tipresias/augury
CNN with l2 regularization
def create_l2_model(): l2_team_input = layers.Input(shape=(n_players_per_team_match,), dtype='int32', name='team') l2_oppo_team_input = layers.Input(shape=(n_players_per_team_match,), dtype='int32', name='oppo_team') l2_player_input = layers.Input(shape=(n_players_per_team_match,), dtype='int32', name='play...
/usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:590: DataConversionWarning: Data with input dtype object was converted to float64 by StandardScaler. warnings.warn(msg, DataConversionWarning) /usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:590: DataConversionWarning: Data with inp...
MIT
notebooks/2019_season/5.4-player-data-cnn.ipynb
tipresias/augury
CNN performance by year
ridge_data = PlayerRidgeData(train_years=(None, 2016), test_years=(None, None)) ridge_X, ridge_y = ridge_data.train_data() ridge_estimator = PlayerRidge() def yearly_performance_scores(estimators, features, labels): model_names = [] errors = [] accuracies = [] years = [] for year in range(2011, 20...
/usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:590: DataConversionWarning: Data with input dtype object was converted to float64 by StandardScaler. warnings.warn(msg, DataConversionWarning) /usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:590: DataConversionWarning: Data with inp...
MIT
notebooks/2019_season/5.4-player-data-cnn.ipynb
tipresias/augury
Ridge with aggregation still has the best performance

The basic & l2 regularized CNNs are comparable, but Ridge performs better on both MAE & accuracy for most years.
prediction_df = pd.read_csv('../data/model_predictions.csv') prediction_scores = (prediction_df[prediction_df['model'] != 'tipresias_player'] .groupby(['model', 'year']) .mean()['tip_point'] .reset_index() .rename(columns={'tip_point': ...
_____no_output_____
MIT
notebooks/2019_season/5.4-player-data-cnn.ipynb
tipresias/augury
Basic RNN
rnn_team_input = layers.Input(shape=(n_players_per_team_match,), dtype='int32', name='team') rnn_oppo_team_input = layers.Input(shape=(n_players_per_team_match,), dtype='int32', name='oppo_team') rnn_player_input = layers.Input(shape=(n_players_per_team_match,), dtype='int32', name='player') rnn_stats_input = layers.In...
/usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:590: DataConversionWarning: Data with input dtype object was converted to float64 by StandardScaler. warnings.warn(msg, DataConversionWarning) /usr/local/lib/python3.6/site-packages/sklearn/utils/validation.py:590: DataConversionWarning: Data with inp...
MIT
notebooks/2019_season/5.4-player-data-cnn.ipynb
tipresias/augury
Forecasting - ATOM

Master Degree Program in Data Science and Advanced Analytics

Business Cases with Data Science Project:

> Group AA
>
> Done by:
> - Beatriz Martins Selidónio Gomes, m20210545
> - Catarina Inês Lopes Garcez, m20210547
> - Diogo André Domingues Pires, m20201076
> - Rodrigo Faí...
import warnings warnings.filterwarnings('ignore') import pandas as pd import numpy as np import matplotlib.pyplot as plt
_____no_output_____
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
Data Exploration and Understanding

Initial Analysis (EDA - Exploratory Data Analysis) [Back to TOC](toc)
df = pd.read_csv('../data/data_aux/df_ATOM.csv') df
_____no_output_____
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
Data Types
# Get to know the number of instances and Features, the DataTypes and if there are missing values in each Feature df.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 1826 entries, 0 to 1825 Data columns (total 7 columns): # Column Non-Null Count Dtype --- ------ -------------- ----- 0 Date 1826 non-null object 1 ATOM-USD_ADJCLOSE 1139 non-null float64 2 ATOM-USD_CLOSE ...
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
Missing Values
# Count the number of missing values for each Feature df.isna().sum().to_frame().rename(columns={0: 'Count Missing Values'})
_____no_output_____
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
Descriptive Statistics
# Descriptive Statistics Table df.describe().T # settings to display all columns pd.set_option("display.max_columns", None) # display the dataframe head df.sample(n=10) #CHECK ROWS THAT HAVE ANY MISSING VALUE IN ONE OF THE COLUMNS is_NaN = df.isnull() row_has_NaN = is_NaN.any(axis=1) rows_with_NaN = df[row_has_NaN] ro...
_____no_output_____
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
Data Preparation

Data Transformation [Back to TOC](toc)

__`Duplicates`__
# Checking if exist duplicated observations print(f'\033[1m' + "Number of duplicates: " + '\033[0m', df.duplicated().sum())
Number of duplicates:  0
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
__`Convert Date to correct format`__
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d') df
_____no_output_____
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
__`Get percentual difference between open and close values and low and high values`__
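The quantities below are plain relative differences expressed in percent. As a scalar sanity check (variable names are illustrative; the notebook's code divides by its first price column, so adjust the denominator accordingly):

```python
def pct_diff(a, b):
    # Absolute difference between a and b as a percentage of b.
    return abs(a - b) / abs(b) * 100

close, open_, high, low = 10.5, 10.0, 11.0, 9.9
print(round(pct_diff(close, open_), 2))  # 5.0
print(round(pct_diff(high, low), 2))     # 11.11
```

Large values of either spread flag days with big intraday moves, which is why they make useful engineered features.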
df['pctDiff_CloseOpen'] = abs((df[df.columns[2]]-df[df.columns[5]])/df[df.columns[2]])*100 df['pctDiff_HighLow'] = abs((df[df.columns[3]]-df[df.columns[4]])/df[df.columns[4]])*100 df.head() def plot_coinValue(df): #Get coin name coin_name = df.columns[2].split('-')[0] #Get date and coin value ...
_____no_output_____
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
Modelling

Building LSTM Model [Back to TOC](toc)

Strategy

Create a DF (windowed_df) where the middle columns will correspond to the close values of X days before the target date and the final column will correspond to the close value of the target date. Use these values for prediction and play with the value of X
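The windowing idea can be sketched without pandas: slide a length-X window over the close-price series, using the X previous values as features and the next value as the target (a simplified analogue of the `get_windowed_df` function below):

```python
def make_windows(series, x):
    # Each row: x previous close values followed by the target close value.
    rows = []
    for i in range(x, len(series)):
        rows.append(series[i - x:i] + [series[i]])
    return rows

closes = [1.0, 2.0, 3.0, 4.0, 5.0]
windows = make_windows(closes, x=3)
print(windows)
# [[1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0]]
```

A larger X gives the model more history per sample but fewer training rows, which is the trade-off being tuned when "playing with the value of X".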
def get_windowed_df(X, df): start_Date = df['Date'] + pd.Timedelta(days=X) perm = np.zeros((1,X+1)) #Get labels for DataFrame j=1 labels=[] while j <= X: label = 'closeValue_' + str(j) + 'daysBefore' labels.append(label) j+=1 labels.append('c...
_____no_output_____
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
Get Best Parameters for LSTM [Back to TOC](toc)
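The (commented-out) cell below sweeps batch size, number of epochs and learning rate; the search itself is just a loop over the Cartesian product of candidate values, keeping the combination with the lowest validation error. A framework-free sketch with a stand-in scoring function (in the real search, the score would come from fitting the LSTM):

```python
from itertools import product

batch_sizes = [500, 1000]
epochs_list = [50, 100]
learning_rates = [0.01, 0.045]

def evaluate(batch_size, epochs, lr):
    # Stand-in for "train the LSTM and return validation MSE";
    # constructed here so that (1000, 100, 0.045) scores best.
    return abs(lr - 0.045) + abs(epochs - 100) / 100 + abs(batch_size - 1000) / 1000

best = min(product(batch_sizes, epochs_list, learning_rates),
           key=lambda params: evaluate(*params))
print(best)  # (1000, 100, 0.045)
```

With real training in `evaluate`, this exhaustive loop is exactly what makes grid search expensive: the cost is the product of the candidate-list lengths times one full training run each.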
#!pip install tensorflow #import os #os.environ['PYTHONHASHSEED']= '0' #import numpy as np #np.random.seed(1) #import random as rn #rn.seed(1) #import tensorflow as tf #tf.random.set_seed(1) # #from tensorflow.keras.models import Sequential #from tensorflow.keras.optimizers import Adam #from tensorflow.keras import lay...
_____no_output_____
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
Run the LSTM Model and Get Predictions [Back to TOC](toc)
#BEST SOLUTION OF THE MODEL # Best MSE=3.813 # Optimal Batch Size: 1000 # Optimal Number of Epochs: 100 # Optimal Value of Learning Rate: 0.045 from tensorflow.keras.models import Sequential from tensorflow.keras.models import Sequential from tensorflow.keras.optimizers import Adam from tensorflow.keras import layers ...
_____no_output_____
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
Recursive Predictions [Back to TOC](toc)
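Recursive forecasting feeds each prediction back into the input window for the next step, so errors can compound over the horizon. A minimal sketch with a dummy one-step model (the notebook uses the trained LSTM in place of `predict_one_step`):

```python
def predict_one_step(window):
    # Dummy model: predict the mean of the window. The real code
    # would call the trained model's predict method here instead.
    return sum(window) / len(window)

history = [1.0, 2.0, 3.0]
window = history[-3:]
recursive_predictions = []
for _ in range(3):
    pred = predict_one_step(window)
    recursive_predictions.append(pred)
    # Slide the window forward, appending the prediction itself.
    window = window[1:] + [pred]
print(recursive_predictions)
```

Because later steps see only predicted values, the forecast tends to flatten toward the model's "typical" output the further ahead it reaches.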
from copy import deepcopy #Get prediction for future dates recursively based on the previous existing information. Then update the window of days upon #which the predictions are made recursive_predictions = [] recursive_dates = np.concatenate([dates_test]) extra_dates = np.array(['2022-05-09', '2022-05-10', '2022-05...
_____no_output_____
MIT
BC4_crypto_forecasting/scripts_updated/ATOM_notebook.ipynb
rodrigomfguedes/business-cases-21-22
This notebook contains the code that uses spacymoji to tokenize our text, which gets around the nltk problem.
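For reference, emoji can also be stripped with a plain Unicode-range regex, without spacy or the spacymoji pipeline. The ranges below cover the common emoji blocks but are deliberately not exhaustive:

```python
import re

# Common emoji code-point ranges (not exhaustive):
# Misc Symbols/Pictographs through the newer emoji blocks,
# the older symbol/dingbat range, and the emoji variation selector.
EMOJI_RE = re.compile(r'[\U0001F300-\U0001FAFF\u2600-\u27BF\uFE0F]')

def remove_emojis(text):
    return EMOJI_RE.sub('', text)

print(remove_emojis('feeling great 😀🌟 today'))  # 'feeling great  today'
```

A tokenizer-aware approach like spacymoji additionally keeps each emoji as a single token (with metadata), which a raw regex cannot do.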
import spacy import numpy as np from spacymoji import Emoji import pandas as pd import emoji import nltk from gensim.models import Word2Vec annSchiz1 = pd.read_csv('data/baseline/dataOut/annSchiz1.csv', encoding='utf-8') annSchiz2 = pd.read_csv('data/baseline/dataOut/annSchiz2.csv', encoding='utf-8') nonAnnSchizFile = ...
_____no_output_____
MIT
notebooks/09 spacymoji.ipynb
gregoryverghese/schizophrenia-twitter
Baseline
emojiDf = getEmojiDf(annSchiz1) emojiDf.to_csv('data/baseline/emoji/emSchiz1Em.csv') emojiDf['Classification'].value_counts() noEmojiDf = emojiDf.copy() noEmojiDf['Tweet'] = noEmojiDf['Tweet'].apply(lambda x: removeEmojis(x)) noEmojiDf.to_csv('data/baseline/emoji/Schiz1Em.csv') emojiDf = getEmojiDf(annSchiz2) emojiDf.t...
_____no_output_____
MIT
notebooks/09 spacymoji.ipynb
gregoryverghese/schizophrenia-twitter