We can, for example, inspect the `Iteration` to discover its iteration bounds.
t_iter.limits
MIT
examples/compiler/03_iet-A.ipynb
millennial-geoscience/devito
And as we keep going down through the IET, we can eventually reach the `Expression` wrapping the user-provided SymPy equation.
expr = t_iter.nodes[0].body[0].body[0].nodes[0].nodes[0].body[0]
expr.view
Of course, there are mechanisms in place to, for example, find all `Expression`s in a given IET. The Devito compiler provides a number of IET visitors, among them `FindNodes`, which retrieves all nodes of a given type. We can therefore easily collect all `Expression`s within `op` as follows:
from devito.ir.iet import Expression, FindNodes
exprs = FindNodes(Expression).visit(op)
exprs[0].view
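To make the `FindNodes` idea concrete, here is a minimal, self-contained sketch of the visitor pattern it embodies: walk a tree of nodes and collect every node of a requested type. This is a toy illustration with hypothetical `Node`/`Iteration`/`Expression` classes, not Devito's actual implementation.

```python
# Toy sketch of a FindNodes-style visitor: depth-first traversal that
# collects all nodes of a given type. Not Devito's implementation.

class Node:
    def __init__(self, *children):
        self.children = list(children)

class Iteration(Node):
    pass

class Expression(Node):
    pass

def find_nodes(node_type, root):
    """Return all nodes of `node_type` found in a depth-first traversal."""
    found = []
    stack = [root]
    while stack:
        node = stack.pop()
        if isinstance(node, node_type):
            found.append(node)
        stack.extend(reversed(node.children))
    return found

tree = Iteration(Expression(), Iteration(Expression(), Expression()))
print(len(find_nodes(Expression, tree)))  # 3
print(len(find_nodes(Iteration, tree)))   # 2
```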
# mount the drive
from google.colab import drive
drive.mount('/content/drive')

import os
folders = os.listdir("/content/drive/My Drive/Data train")
print(folders)

image_data = []
from keras.preprocessing import image
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
for ix in folders:
    path = os.path.join("/c...
(8395, 224, 224, 3)
MIT
Major_Project_4.ipynb
allokkk/Diabetic-Detection
Preparing `y_train`
import pandas as pd
filter_data = pd.read_csv("/content/drive/My Drive/file1.csv")
print(filter_data.shape)
filter_data.head(n=21)

y_train = filter_data["level"]
y_train.shape
One-hot vector conversion
from keras.utils import np_utils
y_train = np_utils.to_categorical(y_train)
print(x_train.shape, y_train.shape)

# create ResNet50 model
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.optimizers import Adam
from keras.layers import *
from keras.models import Model
from...
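As a minimal sketch of what the one-hot conversion above does, here is a plain-numpy equivalent of `np_utils.to_categorical` (the helper function and its name are illustrative, not part of the notebook):

```python
# Plain-numpy sketch of one-hot encoding, assuming integer class labels
# 0..k-1. np_utils.to_categorical performs the same mapping.
import numpy as np

def one_hot(labels, num_classes=None):
    labels = np.asarray(labels, dtype=int)
    if num_classes is None:
        num_classes = labels.max() + 1
    out = np.zeros((labels.size, num_classes))
    out[np.arange(labels.size), labels] = 1.0  # one 1 per row, at the label index
    return out

print(one_hot([0, 2, 1]).tolist())
# [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]]
```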
Copyright 2019 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under...
Apache-2.0
lite/codelabs/digit_classifier/ml/step2_train_ml_model.ipynb
WilliamHYZhang/examples
Step 2: Train a machine learning model
This is the notebook for step 2 of the codelab [**Build a handwritten digit classifier app with TensorFlow Lite**](https://codelabs.developers.google.com/codelabs/digit-classifier-tflite/).
Import dependenc...
# Enable TensorFlow 2
try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 2.x
except Exception:
    pass

from __future__ import absolute_import, division, print_function, unicode_literals

# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow import keras

# Helper libraries
import numpy...
Download and explore the MNIST dataset
The MNIST database contains 60,000 training images and 10,000 testing images of handwritten digits. We will use the dataset to train our digit classification model.
Each image in the MNIST dataset is a 28x28 grayscale image containing a digit from 0 to 9, and a label identifying wh...
# Keras provides a handy API to download the MNIST dataset and split it into
# a "train" dataset and a "test" dataset.
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

# Normalize the input images so that each pixel value is between 0 and 1.
train_images = train_ima...
Train a TensorFlow model to classify digit images
Next, we use the Keras API to build a TensorFlow model and train it on the MNIST "train" dataset. After training, our model will be able to classify digit images.
Our model takes **a 28px x 28px grayscale image** as an input, and outputs **a float array of length 10** re...
# Define the model architecture
model = keras.Sequential([
    keras.layers.InputLayer(input_shape=(28, 28)),
    keras.layers.Reshape(target_shape=(28, 28, 1)),
    keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
    keras.layers.MaxPooling2D(pool_size=(2, 2)),
    keras.layers.Flatten(),
    keras.lay...
Let's take a closer look at our model structure.
model.summary()
There is an extra dimension with **None** shape in every layer in our model, which is called the **batch dimension**. In machine learning, we usually process data in batches to improve throughput, so TensorFlow automatically adds the dimension for you.
Evaluate our model
We run our digit classification model against our...
# Evaluate the model using all images in the test dataset.
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
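The batch dimension described above is easy to see with plain numpy shapes; a minimal sketch (independent of the model itself):

```python
# The "None" batch dimension: the model reports input shape (None, 28, 28)
# because any number of images can be processed at once. Here is the same
# idea with numpy arrays.
import numpy as np

image = np.zeros((28, 28))          # a single grayscale image
batch = np.expand_dims(image, 0)    # a "batch" containing that one image
print(image.shape, batch.shape)     # (28, 28) (1, 28, 28)

many = np.stack([image] * 32)       # a batch of 32 images
print(many.shape)                   # (32, 28, 28)
```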
Although our model is relatively simple, we were able to achieve good accuracy of around 98% on new images that the model had never seen before. Let's visualize the results.
# A helper function that returns 'red'/'black' depending on whether its two
# input parameters match.
def get_label_color(val1, val2):
    if val1 == val2:
        return 'black'
    else:
        return 'red'

# Predict the labels of digit images in our test dataset.
predictions = model.predict(test_images)

# As the model out...
Convert the Keras model to TensorFlow Lite
Now that we have trained the digit classifier model, we will convert it to TensorFlow Lite format for mobile deployment.
# Convert the Keras model to TF Lite format.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_float_model = converter.convert()

# Show model size in KBs.
float_model_size = len(tflite_float_model) / 1024
print('Float model size = %dKBs.' % float_model_size)
As we will deploy our model to a mobile device, we want our model to be as small and as fast as possible. **Quantization** is a common technique used in on-device machine learning to shrink ML models. Here we will use 8-bit numbers to approximate our 32-bit weights, which in turn shrinks the model size by a factor...
# Re-convert the model to TF Lite using quantization.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quantized_model = converter.convert()

# Show model size in KBs.
quantized_model_size = len(tflite_quantized_model) / 1024
print('Quantized model size = %dKBs,' % quantized_model_size)
print('which is about...
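To see what "8-bit numbers approximating 32-bit weights" means, here is a toy sketch of affine (scale/zero-point) quantization in numpy. It mirrors the concept, not TF Lite's exact implementation; the `quantize`/`dequantize` helpers are illustrative names.

```python
# Toy affine quantization: map floats to uint8 with a scale and zero point,
# then map back. Reconstruction error stays below one quantization step.
import numpy as np

def quantize(w, num_bits=8):
    qmin, qmax = 0, 2**num_bits - 1
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = round(-w.min() / scale)
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float64) - zero_point) * scale

w = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
q, scale, zp = quantize(w)
w_hat = dequantize(q, scale, zp)
print(np.abs(w - w_hat).max() < scale)  # error below one quantization step
```

Each uint8 value occupies a quarter of the space of a float32, which is where the roughly 4x size reduction comes from.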
Evaluate the TensorFlow Lite model
By using quantization, we trade off a bit of accuracy for the benefit of a significantly smaller model. Let's calculate the accuracy drop of our quantized model.
# A helper function to evaluate the TF Lite model using the "test" dataset.
def evaluate_tflite_model(tflite_model):
    # Initialize the TFLite interpreter using the model.
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    interpreter.allocate_tensors()
    input_tensor_index = interpreter.get_input_details()[0]["...
Download the TensorFlow Lite model
Let's get our model and integrate it into an Android app.
If you see an error when downloading mnist.tflite from Colab, try running this cell again.
# Save the quantized model to file.
f = open('mnist.tflite', "wb")
f.write(tflite_quantized_model)
f.close()

# Download the digit classification model.
from google.colab import files
files.download('mnist.tflite')
print('`mnist.tflite` has been downloaded')
Good job!
This is the end of *Step 2: Train a machine learning model* in the codelab **Build a handwritten digit classifier app with TensorFlow Lite**. Let's go back to our codelab and proceed to the [next step](https://codelabs.developers.google.com/codelabs/digit-classifier-tflite/2).
`author = soup.find('span', class_='author').text`
Exercise discussed in the previous meeting
# Goal of exercise 2: clean the article corpus as much as possible.
# Please use another article from a web page/server, i.e. the same one from La Teja.
from nltk import tokenize
from nltk import word_tokenize

# Extract the text of the <div id="article-content"> element from the parsed page.
Corpus = soup.find('div', id="article-content").get_text()
tok_corp = word_tokenize(Corpus)
tok_corp
Apache-2.0
CaamanitoNoteBook.ipynb
sergiojim96/JupyterRepo
Extra code
from nltk.tokenize import sent_tokenize, word_tokenize  # basics
import pickle        # important
import pandas as pd  # important
from textblob import TextBlob  # important, we will see it later
import gensim        # important, we will see it later
import nltk          # basics
import re            # basics
import string        # basics

# Others
from sqlalchemy import create...
Libraries:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
%matplotlib inline
import glob
CC-BY-4.0
python3.ipynb
mat-esp-2016/python-3-amanda_joaovictor_rian
Write and test a function that takes an array of years and an array of months as input and returns an array of decimal years.
dados = np.loadtxt(arquivo, comments='%')
anos = dados[:, 0]
meses = dados[:, 1]

def anos_para_ano_decimal(anos, meses):
    assert type(anos) == np.ndarray, "anos must be an array"
    assert type(meses) == np.ndarray, "meses must be an array"
    # Each month adds 1/12 of a year.
    ano_em_decimal = anos + (meses - 1) / 12
    return (ano_em_...
Incorrect test:
anos_para_ano_decimal(1,3)
Correct test:
anos_para_ano_decimal(anos, meses)
Write and test a function that takes a matrix (2D array) of temperature data as input and returns the decimal years, the annual anomaly, the 10-year anomaly, and its respective uncertainty.
def temp_para_outros(dados):
    ano_em_decimal_1 = anos_para_ano_decimal(anos, meses)
    anomalia_anual_1 = dados[:, 4]
    anomalia_10_anos_1 = dados[:, 8]
    unc_a10 = dados[:, 9]
    return (ano_em_decimal_1, anomalia_anual_1, anomalia_10_anos_1, unc_a10)

temp_para_outros(dados)
Using the functions created above to repeat the task from the Python 2 practical.
arquivos = glob.glob("dados/*W-TAVG-Trend.txt")
for arquivo in arquivos:
    print(arquivo)

for arquivo in arquivos:
    dados = np.loadtxt(arquivo, comments='%')
    ano_em_decimal = anos_para_ano_decimal(anos=dados[:, 0], meses=dados[:, 1])
    anomalia_anual = dados[:, 4]
    anomalia_10_anos = dados[:, 8]...
welter Issue 35: Figure of postage stamps of spectral features. Part I: Try it out.
import os
import json
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%config InlineBackend.figure_format = 'retina'
MIT
notebooks/welter_issue035-03_postage_stamps_of_spectral_lines_VI.ipynb
BrownDwarf/welter
Need to re-run these before making each plot.
ws = np.load("../sf/m087/output/mix_emcee/run01/emcee_chain.npy")
burned = ws[:, -200:, :]
xs, ys, zs = burned.shape
fc = burned.reshape(xs*ys, zs)

ff = 10**fc[:, 7] / (10**fc[:, 7] + 10**fc[:, 5])
inds_sorted = np.argsort(ff)
ff_sorted = ff[inds_sorted]
fc_sorted = fc[inds_sorted]
Double check correlation plots as a sanity check for trends in $T_{\mathrm{eff}}$ and $f_\Omega$
sns.distplot(ff)
plt.plot(fc_sorted[:, 0])
plt.plot(fc_sorted[:, 6])
plt.plot(fc_sorted[:, 5])
plt.plot(fc_sorted[:, 7])
#ax = sns.kdeplot(ff_sorted, fc_sorted[:,0], shade=True)
#ax.plot(ff_sorted[400], fc_sorted[400,0], 'b*', ms=13)
#ax.plot(ff_sorted[4000], fc_sorted[4000,0], 'k*', ms=13)
#ax.plot(ff_sorted[7600], fc_sor...
Generate the data using the new `plot_specific_mix_model.py`. This custom Starfish Python script generates model spectra at the 5th, 50th, and 95th percentiles of fill factor, and then saves them to a csv file named `models_ff-05_50_95.csv`.
import pandas as pd

models = pd.read_csv('/Users/gully/GitHub/welter/sf/m087/output/mix_emcee/run01/models_ff-05_50_95.csv')
#models.head()
This is a complex Matplotlib layout
lw = 1.0
from matplotlib import gridspec
from matplotlib.ticker import FuncFormatter
from matplotlib.ticker import ScalarFormatter

sns.set_context('paper')
sns.set_style('ticks')
sns.set_color_codes()
New version has no right panel
lc = 20430
wlb = 20430
wlr = 20470

fig = plt.figure(figsize=(6.0, 3.0))
#fig.subplots_adjust(left=0.05, right=0.95, wspace=0.05)
gs = gridspec.GridSpec(1, 3)
ax1 = fig.add_subplot(gs[0, 0])
ax1.step(models.wl - lc, models.data.values, '-k', alpha=0.3)
ax1.plot(models.wl - lc, models....
ANALYSIS OF PERUVIAN GDP (USING THE BCRP API)
I. Introduction
Practically every economy in the world seeks to grow, and this is measured through GDP; Peru is no exception. In this sense, its behavior is important. From the outset, it is easy to presuppose some characteristics, such as the presence of trend ...
#https://estadisticas.bcrp.gob.pe/estadisticas/series/mensuales/resultados/PN01288PM/html
#PN01770AM : GDP
#PN01142MM : stock market index
#PN01288PM : price index excluding food
url_base = "https://estadisticas.bcrp.gob.pe/estadisticas/series/api/"
cod_ser = "PN01770AM"  # [monthly series codes]
formato = "/json"
per = "/...
['Ene.2005', 'Feb.2005', 'Mar.2005', 'Abr.2005', 'May.2005', 'Jun.2005', 'Jul.2005', 'Ago.2005', 'Sep.2005', 'Oct.2005', 'Nov.2005', 'Dic.2005', 'Ene.2006', 'Feb.2006', 'Mar.2006', 'Abr.2006', 'May.2006', 'Jun.2006', 'Jul.2006', 'Ago.2006', 'Sep.2006', 'Oct.2006', 'Nov.2006', 'Dic.2006', 'Ene.2007', 'Feb.2007', 'Mar.20...
MIT
APIs/pbi.ipynb
abnercasallo/Casos-Aplicados-de-Python
III. Cleaning and creating the DataFrame
import pandas as pd

diccionario = {"Fechas": fechas, "Valores": price_index}
print(diccionario)
df = pd.DataFrame(diccionario)
#df.set_index(df['date'], inplace=True)
#df = df.drop(columns=['date'])
#df["Fechas"] = pd.to_datetime(df["Fechas"], infer_datetime_format=True)

# Any nulls?
#df.isnull().sum()  # none

df
import matp...
Dependencies
import os, warnings, shutil
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from transformers import AutoTokenizer
from sklearn.model_selection import StratifiedKFold

SEED = 0
warnings.filterwarnings("ignore")
MIT
Datasets/jigsaw-dataset-split-pb-roberta-large-192.ipynb
dimitreOliveira/Jigsaw-Multilingual-Toxic-Comment-Classification
Parameters
MAX_LEN = 192
tokenizer_path = 'jplu/tf-xlm-roberta-large'
Load data
train1 = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/jigsaw-toxic-comment-train.csv")
train2 = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/jigsaw-unintended-bias-train.csv")
train2.toxic = train2.toxic.round().astype(int)
train_df = pd.concat([train1[['co...
Train samples 435775
Tokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)
Data generation sanity check
for idx in range(5):
    print('\nRow %d' % idx)
    max_seq_len = 22
    comment_text = train_df['comment_text'].loc[idx]
    enc = tokenizer.encode_plus(comment_text,
                               return_token_type_ids=False,
                               pad_to_max_length=True,
                               max_length=max_seq_len)
    print('comment_text : "%s"' % comment_text)
    print('inpu...
Row 0 comment_text : "Explanation Why the edits made under my username Hardcore Metallica Fan were reverted? They weren't vandalisms, just closure on some GAs after I voted at New York Dolls FAC. And please don't remove the template from the talk page since I'm retired now.89.205.38.27" input_ids : "[0, 5443, 586...
5-Fold split
folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=SEED)
for fold_n, (train_idx, val_idx) in enumerate(folds.split(train_df, train_df['toxic'])):
    print('Fold: %s, Train size: %s, Validation size %s' % (fold_n+1, len(train_idx), len(val_idx)))
    train_df[('fold_%s' % str(fold_n+1))] = 0
    train_df[(...
Fold: 1, Train size: 348620, Validation size 87155 Fold: 2, Train size: 348620, Validation size 87155 Fold: 3, Train size: 348620, Validation size 87155 Fold: 4, Train size: 348620, Validation size 87155 Fold: 5, Train size: 348620, Validation size 87155
Label distribution
for fold_n in range(folds.n_splits):
    fold_n += 1
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(18, 6))
    fig.suptitle('Fold %s' % fold_n, fontsize=22)
    sns.countplot(x="toxic", data=train_df[train_df[('fold_%s' % fold_n)] == 'train'],
                  palette="GnBu_d", ax=ax1).set_title('Train')
    sns.countplot(x="to...
Output 5-fold set
train_df.to_csv('5-fold.csv', index=False)
display(train_df.head())

for fold_n in range(folds.n_splits):
    if fold_n < 3:
        fold_n += 1
        base_path = 'fold_%d/' % fold_n
        # Create dir
        os.makedirs(base_path)
        x_train = tokenizer.batch_encode_plus(train_df[train_df[('fold...
Validation set
valid_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/validation.csv",
                       usecols=['comment_text', 'toxic', 'lang'])
display(valid_df.head())

x_valid = tokenizer.batch_encode_plus(valid_df['comment_text'].values,
                                       return_token_type_ids=False,
                                       ...
Test set
test_df = pd.read_csv("/kaggle/input/jigsaw-multilingual-toxic-comment-classification/test.csv",
                      usecols=['content'])
display(test_df.head())

x_test = tokenizer.batch_encode_plus(test_df['content'].values,
                                      return_token_type_ids=False,
                                      return_a...
16 PDEs: Solution with Time Stepping (Students)
Heat Equation
The **heat equation** can be derived from Fourier's law and energy conservation (see the [lecture notes on the heat equation (PDF)](https://github.com/ASU-CompMethodsPhysics-PHY494/PHY494-resources/blob/master/16_PDEs/16_PDEs_LectureNotes_HeatEquation.pdf))$...
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')

def T_bar(x, t, T0, L, K=237, C=900, rho=2700, nmax=1000):
    T = np.zeros_like(x)
    eta = K / (C*rho)
    for n in range(1, nmax, 2):
        kn = n*np.pi/L
        T += 4*T0/(np.pi * n) * np.sin(kn*x) * np.exp(-kn*kn * et...
CC-BY-4.0
16_PDEs/.ipynb_checkpoints/16_PDEs-Students-checkpoint.ipynb
nachrisman/PHY494
Numerical solution: Leap frog
Discretize (finite difference): for the time domain we only have the initial values, so we use a simple forward difference for the time derivative:
$$\frac{\partial T(x,t)}{\partial t} \approx \frac{T(x, t+\Delta t) - T(x, t)}{\Delta t}$$
For the spatial derivative we have initially all value...
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib notebook

L_rod = 1.     # m
t_max = 3000.  # s
Dx = 0.02      # m
Dt = 2         # s
Nx = int(L_rod // Dx)
Nt = int(t_max // Dt)
Kappa = 237    # W/(m K)
CHeat = 900    # J/K
rho = 2700     # kg/m^3
T0 = 373       # K
Tb = 273       # K

raise...
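The forward-time difference above, combined with a centred second difference in space, gives an explicit update rule. Here is a minimal sketch of one such update step, using the notebook's parameter names (`Kappa`, `CHeat`, `rho`, `Dx`, `Dt`) as assumptions; it is a guide for filling in the stub, not the official solution.

```python
# One explicit (forward-time, centred-space) update of the discretized
# heat equation, with boundary values held fixed.
import numpy as np

L_rod, Dx, Dt = 1.0, 0.02, 2.0
Kappa, CHeat, rho = 237.0, 900.0, 2700.0
eta = Kappa * Dt / (CHeat * rho * Dx * Dx)   # dimensionless step parameter

Nx = int(L_rod // Dx)
T = np.full(Nx + 1, 373.0)   # initial rod temperature T0
T[0] = T[-1] = 273.0         # fixed boundary temperature Tb

def step(T, eta):
    T_new = T.copy()
    # T(x, t+Dt) = T(x, t) + eta * (T(x+Dx, t) - 2 T(x, t) + T(x-Dx, t))
    T_new[1:-1] = T[1:-1] + eta * (T[2:] - 2 * T[1:-1] + T[:-2])
    return T_new

T1 = step(T, eta)
print(T1[1] < T[1])   # points next to the cold ends start cooling
```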
Visualization
Visualize (you can use the code as is). Note how we are making the plot use proper units by multiplying with `Dt * step` and `Dx`.
X, Y = np.meshgrid(range(T_plot.shape[0]), range(T_plot.shape[1]))
Z = T_plot[X, Y]

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.plot_wireframe(X*Dt*step, Y*Dx, Z)
ax.set_xlabel(r"time $t$ (s)")
ax.set_ylabel(r"position $x$ (m)")
ax.set_zlabel(r"temperature $T$ (K)")
fig.tight_layout()
Stability of the solution
Empirical investigation of the stability
Investigate the solution for different values of `Dt` and `Dx`. Can you discern patterns for stable/unstable solutions?
Report `Dt`, `Dx`, and `eta`
* for 3 stable solutions
* for 3 unstable solutions
Wrap your heat diffusion solver in a function so that i...
def calculate_T(L_rod=1, t_max=3000, Dx=0.02, Dt=2, T0=373, Tb=273, step=20):
    Nx = int(L_rod // Dx)
    Nt = int(t_max // Dt)
    Kappa = 237  # W/(m K)
    CHeat = 900  # J/K
    rho = 2700   # kg/m^3
    raise NotImplementedError
    return T_plot

def plot_T(T_plot, Dx, Dt, step):
    X, Y = n...
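The empirical stable/unstable pattern can be checked against the standard stability criterion for this explicit scheme, $\eta = \kappa\,\Delta t/(C\rho\,\Delta x^2) \le 1/2$ (a known result for forward-time, centred-space diffusion, stated here as a guide for the exercise, not as its solution):

```python
# Stability criterion for the explicit heat-equation scheme:
# eta = Kappa*Dt/(CHeat*rho*Dx^2) must not exceed 1/2.
def eta(Dt, Dx, Kappa=237.0, CHeat=900.0, rho=2700.0):
    return Kappa * Dt / (CHeat * rho * Dx * Dx)

def is_stable(Dt, Dx):
    return eta(Dt, Dx) <= 0.5

print(is_stable(2, 0.02))   # eta ~ 0.49: stable
print(is_stable(3, 0.02))   # eta ~ 0.73: unstable
```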
Initial notebook for classifiers analysis and comparison
import os
import sys
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.model_selection import train_test_split
from nltk.corpus import stopwords
from sklearn.svm import LinearSVC
import matplotlib
from sklearn.metrics import accuracy_score
f...
MIT
notebooks/Anatoly's Mafia prediction.ipynb
Nikishul/Kaggle-NMA-Competition
Load data
post, thread = data_prepare.load_train_data()
post_test, thread_test = data_prepare.load_test_data()
label_map = data_prepare.load_label_map()
label_map

num = len(thread)
train_data_to_clean = []
test_data_to_clean = []

post.head(3)
thread.head(3)
Basic cleaning and transforming to the bag-of-words representation
train_data_to_clean = data_prepare.get_all_text_data_from_posts(post, thread)
test_data_to_clean = data_prepare.get_all_text_data_from_posts(post_test, thread_test)

clean_train_data = [data_prepare.clean(s) for s in train_data_to_clean]
clean_test_data = [data_prepare.clean(s) for s in test_data_to_clean]

vectori...
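For readers without the notebook's `data_prepare` helpers, here is a self-contained sketch of the bag-of-words step itself: build a vocabulary over all documents and map each document to a vector of token counts (conceptually what `CountVectorizer` does; the `bag_of_words` helper is illustrative).

```python
# Minimal bag-of-words: vocabulary + per-document token-count vectors.
from collections import Counter

def bag_of_words(docs):
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted({tok for doc in tokenized for tok in doc})
    vectors = []
    for doc in tokenized:
        counts = Counter(doc)
        vectors.append([counts.get(tok, 0) for tok in vocab])
    return vocab, vectors

vocab, vectors = bag_of_words(["the town sleeps", "the mafia wakes the town"])
print(vocab)        # ['mafia', 'sleeps', 'the', 'town', 'wakes']
print(vectors[1])   # [1, 0, 2, 1, 1]
```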
Get additional features
train_data_features = feature_extraction.get_features(post, thread, train_data_features)
test_data_features = feature_extraction.get_features(post_test, thread_test, test_data_features)

X_test = test_data_features
X_train = train_data_features
y_train = thread["thread_label_id"]
#X_train, X_val, y_train, y_val = train_test_split...
Model efficiency comparison; eventually it came down to RandomForest vs. LinearSVC
models = [
    RandomForestClassifier(n_estimators=120),
    #ClassifierChain(LogisticRegression()),
    #BinaryRelevance(GaussianNB()),
    LinearSVC(),
    #MultinomialNB(),
    #LogisticRegression()
]
CV = 4
cv_df = pd.DataFrame(index=range(CV * len(models)))
entries = []
for model in models:
    model_name = model._...
Random Forest looks more stable. There is a more thorough comparison of these two models' predictions in the submission statistics notebook.
forest = RandomForestClassifier(n_estimators=110, max_depth=5)
forest = forest.fit(X_train, y_train)

train_predict = forest.predict(X_train)
test_predict = forest.predict(X_val)

from sklearn.metrics import accuracy_score
acc = accuracy_score(y_val, test_predict)
print("Accuracy on the training dataset: {:.2f}".fo...
Accuracy on val dataset: 79.63 Accuracy on train dataset: 99.67
**Overfitting is for sure the biggest issue with this dataset**
conf_mat = confusion_matrix(y_val, y_pred, labels=label_map["type_id"].values)
fig, ax = plt.subplots(figsize=(13, 13))
sns.heatmap(conf_mat, annot=True, fmt='d',
            xticklabels=label_map.index, yticklabels=label_map.index)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.show()
The confusion matrix above makes it clear that closed-setup is the toughest class to predict.
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfTransformer

nb = Pipeline([('vect', CountVectorizer()),
               ('tfidf', TfidfTransformer()),
               ('clf', MultinomialNB()),
])
nb.fit(X_train, y_train)
...
9 closed-setup byor 16 other vengeful 27 byor bastard 42 closed-setup byor 46 closed-setup byor 49 other cybrid 51 supernatural bastard 55 closed-setup byor 68 other bastard 70 paranormal bastard 75 other bastard 93 paranormal byor 106 paranormal supernatural 111 other cybrid 112 other cybrid 113 other cybrid 116 paran...
Predict heart failure with Watson Machine Learning
![alt text](https://www.cdc.gov/dhdsp/images/heart_failure.jpg "Heart failure")
This notebook contains steps and code to create a predictive model to predict heart failure and then deploy that model to Watson Machine Learning so it can be used in an application. Learnin...
# IMPORTANT
# Follow the lab instructions to insert the Spark Session Data Frame to get
# access to the data used in this notebook.
# Ensure the Spark Session Data Frame is named df_data.
# Add the .option('inferSchema','True')\ line after the option line of the
# inserted code.

.option('inferSchema','True')\
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Explore the loaded data by using the following Apache® Spark DataFrame methods:
* print schema
* print top ten records
* count all records
df_data.printSchema()
As you can see, the data contains ten fields. The HEARTFAILURE field is the one we would like to predict (label).
df_data.show()

df_data.describe().show()

df_data.count()
As you can see, the data set contains 10800 records.
3. Interactive Visualizations w/PixieDust
# To confirm you have the latest version of PixieDust on your system, run this cell
!pip install pixiedust==1.1.2
If indicated by the installer, restart the kernel and rerun the notebook until here and continue with the workshop.
import pixiedust
Simple visualization using bar charts
With PixieDust display(), you can visually explore the loaded data using built-in charts, such as bar charts, line charts, scatter plots, or maps.
To explore a data set: choose the desired chart type from the drop down, configure the chart options, and configure the display options.
display(df_data)
4. Create an Apache® Spark machine learning model
In this section you will learn how to prepare data and how to create and train an Apache® Spark machine learning model.
4.1: Prepare data
In this subsection you will split your data into train and test data sets.
split_data = df_data.randomSplit([0.8, 0.20], 24)
train_data = split_data[0]
test_data = split_data[1]

print("Number of training records: " + str(train_data.count()))
print("Number of testing records : " + str(test_data.count()))
As you can see, our data has been successfully split into two data sets:
* The train data set, which is the largest group, is used for training.
* The test data set will be used for model evaluation and is used to test the assumptions of the model.
4.2: Create pipeline and train a model
In this section you will create an A...
from pyspark.ml.feature import StringIndexer, IndexToString, VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
from pyspark.ml import Pipeline, Model
In the following step, convert all the string fields to numeric ones by using the StringIndexer transformer.
stringIndexer_label = StringIndexer(inputCol="HEARTFAILURE", outputCol="label").fit(df_data)
stringIndexer_sex = StringIndexer(inputCol="SEX", outputCol="SEX_IX")
stringIndexer_famhist = StringIndexer(inputCol="FAMILYHISTORY", outputCol="FAMILYHISTORY_IX")
stringIndexer_smoker = StringIndexer(inputCol="SMOKERLAST5YRS",...
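To see what `StringIndexer` does without a Spark cluster, here is a pure-Python sketch of its default behavior: each distinct string gets a numeric index, with the most frequent label indexed first (the `string_index` helper and its tie-breaking rule are illustrative, not Spark's code).

```python
# Pure-Python sketch of StringIndexer: map strings to indices ordered by
# descending frequency (ties broken alphabetically here for determinism).
from collections import Counter

def string_index(values):
    freq = Counter(values)
    labels = sorted(freq, key=lambda v: (-freq[v], v))
    mapping = {v: float(i) for i, v in enumerate(labels)}
    return [mapping[v] for v in values], labels

indexed, labels = string_index(["N", "Y", "N", "N", "Y"])
print(labels)    # ['N', 'Y']
print(indexed)   # [0.0, 1.0, 0.0, 0.0, 1.0]
```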
In the following step, create a feature vector by combining all features together.
vectorAssembler_features = VectorAssembler(
    inputCols=["AVGHEARTBEATSPERMIN", "PALPITATIONSPERDAY", "CHOLESTEROL", "BMI",
               "AGE", "SEX_IX", "FAMILYHISTORY_IX", "SMOKERLAST5YRS_IX",
               "EXERCISEMINPERWEEK"],
    outputCol="features")
Next, define estimators you want to use for classification. Random Forest is used in the following example.
rf = RandomForestClassifier(labelCol="label", featuresCol="features")
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
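A random forest classifies by letting each of its decision trees vote and taking the majority class. A toy sketch of just the voting step (the tree predictions here are made up):

```python
from collections import Counter

def majority_vote(tree_predictions):
    # each decision tree casts one vote; the most common class wins
    return Counter(tree_predictions).most_common(1)[0][0]

vote = majority_vote([1, 0, 1, 1, 0])  # three trees say 1, two say 0 -> 1
```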
Finally, convert the indexed labels back to the original labels.
labelConverter = IndexToString(inputCol="prediction", outputCol="predictedLabel", labels=stringIndexer_label.labels) transform_df_pipeline = Pipeline(stages=[stringIndexer_label, stringIndexer_sex, stringIndexer_famhist, stringIndexer_smoker, vectorAssembler_features]) transformed_df = transform_df_pipeline.fit(df_data...
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Let's build the pipeline now. A pipeline consists of transformers and an estimator.
pipeline_rf = Pipeline(stages=[stringIndexer_label, stringIndexer_sex, stringIndexer_famhist, stringIndexer_smoker, vectorAssembler_features, rf, labelConverter])
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Now, you can train your Random Forest model by using the previously defined **pipeline** and **training data**.
model_rf = pipeline_rf.fit(train_data)
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
You can check your **model accuracy** now. To evaluate the model, use **test data**.
predictions = model_rf.transform(test_data) evaluatorRF = MulticlassClassificationEvaluator(labelCol="label", predictionCol="prediction", metricName="accuracy") accuracy = evaluatorRF.evaluate(predictions) print("Accuracy = %g" % accuracy) print("Test Error = %g" % (1.0 - accuracy))
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
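The evaluator with `metricName="accuracy"` computes the fraction of test rows whose prediction matches the true label. The same quantity in plain Python, on made-up labels:

```python
def accuracy_sketch(labels, predictions):
    # accuracy = (number of matching label/prediction pairs) / (total pairs)
    correct = sum(1 for y, p in zip(labels, predictions) if y == p)
    return correct / len(labels)

acc = accuracy_sketch([1, 0, 1, 1], [1, 0, 0, 1])  # 3 of 4 correct
test_error = 1.0 - acc
```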
You can tune your model now to achieve better accuracy. For simplicity, the tuning section is omitted from this example.

5. Persist model

In this section you will learn how to store your pipeline and model in the Watson Machine Learning repository by using the Python client libraries. First, you must import the client libraries.
from repository.mlrepositoryclient import MLRepositoryClient from repository.mlrepositoryartifact import MLRepositoryArtifact
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Authenticate to the Watson Machine Learning service on IBM Cloud.

**STOP here!!!!:** Put the authentication information (username, password, and instance_id) from your instance of the Watson Machine Learning service here.
#Specify your username, password, and instance_id credentials for Watson ML service_path = 'https://ibm-watson-ml.mybluemix.net' username = 'xxxxx' password = 'xxxxx' instance_id = 'xxxxx'
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
**Tip:** The service_path, username, password, and instance_id can be found on the Service Credentials tab of the Watson Machine Learning service instance created on IBM Cloud.
ml_repository_client = MLRepositoryClient(service_path) ml_repository_client.authorize(username, password)
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Create model artifact (abstraction layer).
pipeline_artifact = MLRepositoryArtifact(pipeline_rf, name="pipeline") model_artifact = MLRepositoryArtifact(model_rf, training_data=train_data, name="Heart Failure Prediction Model", pipeline_artifact=pipeline_artifact)
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
**Tip:** The MLRepositoryArtifact method expects a trained model object, training data, and a model name. (It is this model name that is displayed by the Watson Machine Learning service.)

5.1: Save pipeline and model

In this subsection you will learn how to save pipeline and model artifacts to your Watson Machine Learn...
saved_model = ml_repository_client.models.save(model_artifact)
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Get the saved model metadata from Watson Machine Learning.

**Tip:** Use *meta.available_props()* to get the list of available props.
saved_model.meta.available_props() print("modelType: " + saved_model.meta.prop("modelType")) print("trainingDataSchema: " + str(saved_model.meta.prop("trainingDataSchema"))) print("creationTime: " + str(saved_model.meta.prop("creationTime"))) print("modelVersionHref: " + saved_model.meta.prop("modelVersionHref")) print...
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
5.2 Load model to verify that it was saved correctly

You can load your model to make sure that it was saved correctly.
loadedModelArtifact = ml_repository_client.models.get(saved_model.uid)
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Print the model name to make sure that the model artifact has been loaded correctly.
print(str(loadedModelArtifact.name))
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Congratulations, you've successfully created a predictive model and saved it in the Watson Machine Learning service. You can now switch to the Watson Machine Learning console to deploy the model and then test it in an application, or continue within the notebook to deploy the model using the APIs.

6.0 Accessing Wat...
#Import Python WatsonML Repository SDK from repository.mlrepositoryclient import MLRepositoryClient from repository.mlrepositoryartifact import MLRepositoryArtifact #Authenticate ml_repository_client = MLRepositoryClient(service_path) ml_repository_client.authorize(username, password) #Deploy a new model. I renamed ...
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
6.1 Get the Watson ML API token

The Watson ML API authenticates all requests through a token; start by requesting the token from our Watson ML service.
import json import requests from base64 import b64encode token_url = service_path + "/v3/identity/token" # NOTE: for python 2.x, uncomment below, and comment out the next line of code: #userAndPass = b64encode(bytes(username + ':' + password)).decode("ascii") # Use below for python 3.x, comment below out for python 2...
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
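The token endpoint uses HTTP Basic authentication: the `username:password` pair is base64-encoded and sent in the `Authorization` header. A self-contained sketch of building that header (the credentials here are placeholders, not real ones):

```python
from base64 import b64encode

def basic_auth_header(username, password):
    # HTTP Basic auth: base64-encode "username:password"; in Python 3 the
    # string must be encoded to bytes before base64-encoding
    token = b64encode((username + ':' + password).encode('utf-8')).decode('ascii')
    return {'Authorization': 'Basic ' + token}

header = basic_auth_header('user', 'pass')  # placeholder credentials
```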
6.2 Preview currently published models
model_url = service_path + "/v3/wml_instances/" + instance_id + "/published_models" headers = {'authorization': 'Bearer ' + watson_ml_token } response = requests.request("GET", model_url, headers=headers) published_models = json.loads(response.text) print(json.dumps(published_models, indent=2))
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Read the details of any returned models.
print('{} model(s) are available in your Watson ML Service'.format(len(published_models['resources']))) for model in published_models['resources']: print('\t- name: {}'.format(model['entity']['name'])) print('\t model_id: {}'.format(model['metadata']['guid'])) print('\t deployments: {}'.format(m...
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
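The `/published_models` endpoint returns JSON, and the loop above walks `resources` to pull out each model's name and GUID. A sketch against a hypothetical response body (the field names mirror the keys read above; the model name and GUID are made up):

```python
import json

# a hypothetical response body shaped like the keys the loop above reads
sample_response = '''{"resources": [
  {"entity": {"name": "Heart Failure Prediction Model"},
   "metadata": {"guid": "abc-123"}}]}'''

published = json.loads(sample_response)
names = [m['entity']['name'] for m in published['resources']]
guids = [m['metadata']['guid'] for m in published['resources']]
```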
Create a new deployment of the model.
# Update this `model_id` with the model_id from model that you wish to deploy listed above. model_id = 'xxxx' deployment_url = service_path + "/v3/wml_instances/" + instance_id + "/published_models/" + model_id + "/deployments" payload = "{\"name\": \"Heart Failure Prediction Model Deployment\", \"description\": \"F...
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Monitor the status of the deployment.
# Update this `deployment_id` from the newly deployed model from above. deployment_id = "xxxx" deployment_details_url = service_path + "/v3/wml_instances/" + instance_id + "/published_models/" + model_id + "/deployments/" + deployment_id headers = {'authorization': 'Bearer ' + watson_ml_token, 'content-type': "applic...
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
6.3 Invoke the prediction model deployment

Define a method to call the scoring URL. Replace the **scoring_url** in the method below with the scoring_url returned above.
def get_prediction_ml(ahb, ppd, chol, bmi, age, sex, fh, smoker, exercise_minutes ): scoring_url = 'xxxx' scoring_payload = { "fields":["AVGHEARTBEATSPERMIN","PALPITATIONSPERDAY","CHOLESTEROL","BMI","AGE","SEX","FAMILYHISTORY","SMOKERLAST5YRS","EXERCISEMINPERWEEK"],"values":[[ahb, ppd, chol, bmi, age, sex, fh, ...
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Call the get_prediction_ml method to exercise our prediction model.
print('Is a 44 year old female that smokes with a low BMI at risk of Heart Failure?: {}'.format(get_prediction_ml(100,85,242,24,44,"F","Y","Y",125)))
_____no_output_____
Apache-2.0
notebooks/predictiveModel.ipynb
dzwietering/watson-dojo-pm-tester
Chapter 4: Minimum spanning trees

In this chapter we will continue to study algorithms that process graphs. We will implement Kruskal's algorithm to construct the **minimum spanning tree** of a graph, a subgraph that efficiently connects all nodes.

Trees in Python

A tree is an undirected graph where any two edges are ...
tree_dict = {'A' : set(['D']), 'B' : set(['D']), 'C' : set(['D']), 'D' : set(['A', 'B', 'C', 'E']), 'E' : set(['D', 'F']), 'F' : set(['E'])}
_____no_output_____
MIT
Chapters/Old/06.MinimumSpanningTrees/Chapter4.ipynb
MichielStock/SelectedTopicsOptimization
Though in this chapter, we prefer to represent the tree as a list (set) of links:
tree_links = [(node, neighbor) for node in tree_dict.keys() for neighbor in tree_dict[node]] tree_links
_____no_output_____
MIT
Chapters/Old/06.MinimumSpanningTrees/Chapter4.ipynb
MichielStock/SelectedTopicsOptimization
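One sanity check on the link representation: the adjacency dictionary stores each undirected edge twice (once per endpoint), so a tree on n nodes yields exactly 2·(n−1) links:

```python
tree_dict = {'A': {'D'}, 'B': {'D'}, 'C': {'D'},
             'D': {'A', 'B', 'C', 'E'}, 'E': {'D', 'F'}, 'F': {'E'}}
tree_links = [(node, neighbor) for node in tree_dict for neighbor in tree_dict[node]]

n = len(tree_dict)
# a tree has n-1 edges; each appears twice in the link list
assert len(tree_links) == 2 * (n - 1)
```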
If we choose one node as the **root** of the tree, we have exactly one path from this root to each of the other terminal nodes. This idea can be applied recursively as follows: from this root, each neighboring node is itself the root of a subtree. Each of these subtrees also consists of a root and possibly one or more subtrees. H...
tree_list = ['D', ['A'], ['B'], ['C'], ['E', ['F']]]
_____no_output_____
MIT
Chapters/Old/06.MinimumSpanningTrees/Chapter4.ipynb
MichielStock/SelectedTopicsOptimization
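The nested-list form lends itself to recursion: a tree is its root followed by zero or more subtrees. For instance, collecting the leaves (a small helper we add for illustration, not part of the course code):

```python
tree_list = ['D', ['A'], ['B'], ['C'], ['E', ['F']]]

def leaves(tree):
    # a tree is [root, subtree, subtree, ...]; no subtrees means the root is a leaf
    root, *subtrees = tree
    if not subtrees:
        return [root]
    return [leaf for sub in subtrees for leaf in leaves(sub)]

found = leaves(tree_list)  # ['A', 'B', 'C', 'F']
```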
Minimum spanning tree

Suppose we have an undirected connected weighted graph $G$ as depicted below.

![A weighted graph](Figures/graph.png)

Weighted graphs can be implemented either as a set of weighted edges or as a dictionary.
vertices = ['A', 'B', 'C', 'D', 'E', 'F', 'G'] edges = set([(5, 'A', 'D'), (7, 'A', 'B'), (8, 'B', 'C'), (9, 'B', 'D'), (7, 'B', 'E'), (5, 'C', 'E'), (15, 'D', 'E'), (6, 'F', 'D'), (8, 'F', 'E'), (9, 'E', 'G'), (11, 'F', 'G')]) weighted_adj_list = {v : set([]) for v in vertices} for weight, vertex1,...
_____no_output_____
MIT
Chapters/Old/06.MinimumSpanningTrees/Chapter4.ipynb
MichielStock/SelectedTopicsOptimization
For example, the nodes may represent cities and the weight of an edge may represent the cost of implementing a communication line between two cities. If we want to make communication possible between all cities, there should be a path between any two cities. A **spanning tree** is a subgraph of $G$ that is a tree which...
from union_set_forest import USF animals = ['mouse', 'bat', 'robin', 'trout', 'seagull', 'hummingbird', 'salmon', 'goldfish', 'hippopotamus', 'whale', 'sparrow'] union_set_forest = USF(animals) # group mammals together union_set_forest.union('mouse', 'bat') union_set_forest.union('mouse', 'hippopotamus') u...
_____no_output_____
MIT
Chapters/Old/06.MinimumSpanningTrees/Chapter4.ipynb
MichielStock/SelectedTopicsOptimization
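The `union_set_forest` module is local to the course; in case it is not at hand, a minimal union-set forest with path compression can be sketched as below (method names chosen to match the usage above, but this is our sketch, not the course's implementation):

```python
class USF:
    """Minimal union-set forest (disjoint sets) with path compression."""

    def __init__(self, items):
        self.parent = {item: item for item in items}

    def find(self, item):
        # walk up to the root, halving the path as we go (path compression)
        while self.parent[item] != item:
            self.parent[item] = self.parent[self.parent[item]]
            item = self.parent[item]
        return item

    def union(self, a, b):
        root_a, root_b = self.find(a), self.find(b)
        if root_a != root_b:
            self.parent[root_a] = root_b

usf = USF(['mouse', 'bat', 'whale', 'trout'])
usf.union('mouse', 'bat')  # group two mammals together
```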
Kruskal's algorithm

Kruskal's algorithm is a very simple algorithm to find the minimum spanning tree. The main idea is to start with an initial 'forest' of the individual nodes of the graph. In each step of the algorithm we add an edge with the smallest possible value that connects two disjoint trees in the forest. Th...
def kruskal(vertices, edges): """ Kruskal's algorithm for finding a minimum spanning tree Input : - vertices : a set of the vertices of the graph - edges : a list of weighted edges (e.g. (0.7, 'A', 'B')) for an edge from node A to node B with weigth 0.7 Output: ...
_____no_output_____
MIT
Chapters/Old/06.MinimumSpanningTrees/Chapter4.ipynb
MichielStock/SelectedTopicsOptimization
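Since the body of the function above is truncated here, a self-contained sketch of Kruskal's algorithm (ours, with an inlined union-find rather than the course's `USF` class) applied to the example graph:

```python
def kruskal_sketch(vertices, edges):
    """Kruskal's algorithm: sort edges by weight and keep an edge only if it
    joins two different trees, tracked with a small inlined union-find."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    mst, total = [], 0
    for weight, u, v in sorted(edges):
        root_u, root_v = find(u), find(v)
        if root_u != root_v:          # u and v are in disjoint trees: accept the edge
            parent[root_u] = root_v
            mst.append((weight, u, v))
            total += weight
    return mst, total

vertices = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
edges = {(5, 'A', 'D'), (7, 'A', 'B'), (8, 'B', 'C'), (9, 'B', 'D'),
         (7, 'B', 'E'), (5, 'C', 'E'), (15, 'D', 'E'), (6, 'F', 'D'),
         (8, 'F', 'E'), (9, 'E', 'G'), (11, 'F', 'G')}
mst, total = kruskal_sketch(vertices, edges)  # spanning tree: 6 edges, total weight 39
```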
The travelling salesman problem

The travelling salesman problem is a well-known problem in computer science. The goal is to find a tour in a graph with minimal cost. This problem is NP-hard: there is no known algorithm to solve it efficiently for large graphs. The tour is represented as a dictionary; for each key-value pa...
def nearest_neighbour_tsa(graph, start): """ Nearest Neighbour heuristic for the travelling salesman problem Inputs: - graph: the graph as an adjacency list - start: the vertex to start Outputs: - tour: the tour as a dictionary - tour_cost: the cost of the t...
_____no_output_____
MIT
Chapters/Old/06.MinimumSpanningTrees/Chapter4.ipynb
MichielStock/SelectedTopicsOptimization
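Since the heuristic's body is truncated here, a complete sketch follows, assuming the graph is given as a dict of dicts `{vertex: {neighbour: weight}}` (our assumed representation; the notebook's adjacency list may differ):

```python
def nearest_neighbour_sketch(graph, start):
    # repeatedly hop to the cheapest unvisited neighbour, then close the tour
    tour = {}
    cost = 0
    unvisited = set(graph) - {start}
    current = start
    while unvisited:
        nxt = min(unvisited, key=lambda v: graph[current][v])
        tour[current] = nxt
        cost += graph[current][nxt]
        unvisited.remove(nxt)
        current = nxt
    tour[current] = start           # return to the starting vertex
    cost += graph[current][start]
    return tour, cost

g = {'A': {'B': 1, 'C': 4}, 'B': {'A': 1, 'C': 2}, 'C': {'A': 4, 'B': 2}}
tour, cost = nearest_neighbour_sketch(g, 'A')  # A->B->C->A, cost 1+2+4 = 7
```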
This notebook serves as both an introduction to Jupyter notebooks *and* a brief introduction to Python. Note that this portion is not a comprehensive discussion of the Python language. There are many books (with many hundreds of pages) on the subject, and the goal here is to introduce you to some basic concepts that will...
# You can also describe what you are doing in code-- just start the line with "#" # These "comments" tend to be short, so more general descriptions or motivation should probably go in the "markdown" cells. # Below, I declare a variable x and set it equal to 5 x = 5 # Now, I can perform operations on x: y1 = x*4 y2 = ...
20 10 125
MIT
Python_DS_course/intro.ipynb
yaoyu-e-wang/teaching
Above, we declared an integer `x` and performed some operations. It appears trivial, but it covers a number of basic, but important, items. Any time we use the `=` sign, we are performing an "assignment" of a value to a variable. In our usual mathematical language, we might read `x=5` as "x equals 5". While that int...
# These are all the same and valid: x=5 x = 5 x =5
_____no_output_____
MIT
Python_DS_course/intro.ipynb
yaoyu-e-wang/teaching
All of the above are valid and equivalent. The space **after** `x` does not matter. The reason white-space matters at the start of the line is that Python uses the "leading" white space to create "code blocks". This whitespace can be created either with spaces or with the "Tab" key. As long as you are consistent, it is O...
# This "import" gives us access to "out of the box" code that lets us generate random numbers import random # Generate 5 random integers between zero and 100: for i in range(5): x = random.randint(0,100) print(x) print('Done')
88 46 93 49 97
MIT
Python_DS_course/intro.ipynb
yaoyu-e-wang/teaching
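One practical note on the loop above: `random` produces pseudo-random numbers, so seeding the generator makes a run reproducible. A quick demonstration (the seed value 7 is arbitrary):

```python
import random

random.seed(7)                 # fix the starting state of the generator
first = [random.randint(0, 100) for _ in range(5)]

random.seed(7)                 # same seed -> same "random" sequence
second = [random.randint(0, 100) for _ in range(5)]

assert first == second
assert all(0 <= x <= 100 for x in first)
```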
Above, the `for` loop lets us do something repeatedly-- for the same amount of typing we can do this for 5 random integers, or 5 million. Each time, the indented code (only two lines) is executed; Python analyzes the "leading" space to determine the code blocks. If it helps, you can imagine a small arrow going line-b...
a_list = [0,5,7,4,1,2] for x in a_list: print('Look at ' + str(x)) if (x % 2) == 0: print(str(x) + ' is even.') else: print(str(x) + ' is odd.') print('...') print('Done with loop.')
Look at 0 0 is even. ... Look at 5 5 is odd. ... Look at 7 7 is odd. ... Look at 4 4 is even. ... Look at 1 1 is odd. ... Look at 2 2 is even. ... Done with loop.
MIT
Python_DS_course/intro.ipynb
yaoyu-e-wang/teaching
Some notes about the code above:

- We declare a **list** by putting items inside the square brackets `[...]`. Lists can be anything, even mixing "types". For instance, ```a_list = [1, 'a', 2.3, 'b']``` is valid Python. This list mixes integers, "strings" (letters/words), and "floats" (non-integer numbers).
- The `for`...
ensg_to_genes = { 'ENSG00000141510': 'TP53', 'ENSG00000134323': 'MYCN', 'ENSG00000171094': 'ALK' } # the 'key' can reference anything-- below it points at a list of genes in a hypothetical pathway pathways = { 'pathway_A': ['TP53', 'BCL2L12', 'MTOR'], 'pathway_X': ['MYCN', 'PPARG', 'EGFR'] }
_____no_output_____
MIT
Python_DS_course/intro.ipynb
yaoyu-e-wang/teaching
Each "key" (which is unique!) points at a "value"; you will also see these called "key-value pairs". In the first dictionary (`ensg_to_genes`), the unique "ENSG" IDs maps to the common gene name (a string). In the second dictionary (`pathways`), the unique pathway names point at a list of strings. The "keys" can be ...
ensg_to_genes['ENSG00000134323']
_____no_output_____
MIT
Python_DS_course/intro.ipynb
yaoyu-e-wang/teaching
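Looking up a key that is absent raises a `KeyError`; `dict.get` lets you supply a default instead. A quick example with the gene dictionary (the all-zeros ID is an invented, deliberately missing key):

```python
ensg_to_genes = {
    'ENSG00000141510': 'TP53',
    'ENSG00000134323': 'MYCN',
    'ENSG00000171094': 'ALK'
}

gene = ensg_to_genes.get('ENSG00000141510', 'unknown')     # key present -> 'TP53'
missing = ensg_to_genes.get('ENSG00000000000', 'unknown')  # key absent -> 'unknown'
```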