Birth weights

Let's look at the distribution of birth weights again.

import first
live, firsts, others = first.MakeFrames()
MIT
DSC 530 - Data Exploration and Analysis/ThinkStats2/solutions/chap06soln.ipynb
Hakuna-Patata/BU_MSDS_PTW
Based on KDE, it looks like the distribution is skewed to the left.
birth_weights = live.totalwgt_lb.dropna()
pdf = thinkstats2.EstimatedPdf(birth_weights)
thinkplot.Pdf(pdf, label='birth weight')
thinkplot.Config(xlabel='Birth weight (pounds)', ylabel='PDF')
The mean is less than the median, which is consistent with left skew.
Mean(birth_weights), Median(birth_weights)
And both ways of computing skew are negative, which is consistent with left skew.
Skewness(birth_weights), PearsonMedianSkewness(birth_weights)
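Both statistics used above can be reimplemented in a few lines. This sketch (plain NumPy, independent of the ThinkStats2 `Skewness` and `PearsonMedianSkewness` helpers) shows what they compute and checks the sign convention on a small left-skewed sample.

```python
import numpy as np

def skewness(xs):
    # Moment-based sample skewness: g1 = m3 / m2 ** 1.5
    xs = np.asarray(xs, dtype=float)
    devs = xs - xs.mean()
    m2 = (devs ** 2).mean()
    m3 = (devs ** 3).mean()
    return m3 / m2 ** 1.5

def pearson_median_skewness(xs):
    # gp = 3 * (mean - median) / std
    xs = np.asarray(xs, dtype=float)
    return 3 * (xs.mean() - np.median(xs)) / xs.std()

# A sample with a long left tail: both statistics come out negative.
sample = np.array([1.0, 7.0, 8.0, 8.5, 9.0, 9.5, 10.0])
print(skewness(sample) < 0, pearson_median_skewness(sample) < 0)  # True True
```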
Adult weights

Now let's look at adult weights from the BRFSS. The distribution looks skewed to the right.

adult_weights = df.wtkg2.dropna()
pdf = thinkstats2.EstimatedPdf(adult_weights)
thinkplot.Pdf(pdf, label='Adult weight')
thinkplot.Config(xlabel='Adult weight (kg)', ylabel='PDF')
The mean is greater than the median, which is consistent with skew to the right.
Mean(adult_weights), Median(adult_weights)
And both ways of computing skewness are positive.
Skewness(adult_weights), PearsonMedianSkewness(adult_weights)
Exercises

The distribution of income is famously skewed to the right. In this exercise, we’ll measure how strong that skew is. The Current Population Survey (CPS) is a joint effort of the Bureau of Labor Statistics and the Census Bureau to study income and related variables. Data collected in 2013 is available from htt...

def InterpolateSample(df, log_upper=6.0):
    """Makes a sample of log10 household income.

    Assumes that log10 income is uniform in each range.

    df: DataFrame with columns income and freq
    log_upper: log10 of the assumed upper bound for the highest range

    returns: NumPy array of log10 household income ...
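The idea behind `InterpolateSample` can be illustrated with made-up bins. The edges and frequencies below are invented for the sketch, not the actual CPS data, and the function is a simplified stand-in rather than the notebook's implementation.

```python
import numpy as np

def interpolate_sample(log_edges, freqs, seed=42):
    # Draw `freq` values uniformly from each [lo, hi) log10-income range.
    rng = np.random.default_rng(seed)
    parts = [rng.uniform(lo, hi, size=n)
             for lo, hi, n in zip(log_edges[:-1], log_edges[1:], freqs)]
    return np.concatenate(parts)

# Illustrative bins only: log10 edges at $1k, $10k, $100k, and an assumed
# upper bound of $1M (log_upper = 6.0), with invented household counts.
log_edges = [3.0, 4.0, 5.0, 6.0]
freqs = [10, 80, 10]
log_sample = interpolate_sample(log_edges, freqs)
incomes = 10 ** log_sample
print(len(incomes))  # 100
```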
Compute the median, mean, skewness and Pearson’s skewness of the resulting sample. What fraction of households report a taxable income below the mean? How do the results depend on the assumed upper bound?
# Solution
Mean(sample), Median(sample)

# Solution
Skewness(sample), PearsonMedianSkewness(sample)

# Solution
# About 66% of the population makes less than the mean
cdf.Prob(Mean(sample))
Cross-validation

When training a model, we split the data into train data and validation data for validation. In that case, the validation data has no influence on training the model, so the amount of data the model learns from shrinks and the model overfits the train data. To address this, we can repeat the train/validation split several times and train the model on the various resulting datasets; this is called cross validation. cross validation...

import random
from tqdm.notebook import tqdm, tnrange
import os
import numpy as np
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import AdamW
from transformers import get_linear_schedule_with_warmup
from sklearn.model_selection import train_test_split...
Number of available GPUs: 1
MIT
9.cross-validation.ipynb
qkrwjdan/dacon_news_topic_clasiification
Fixing the seed for reproducibility. Source: https://dacon.io/codeshare/2363?dtype=vote&s_id=0

RANDOM_SEED = 42

def seed_everything(seed: int = 42):
    random.seed(seed)
    np.random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)  # type: ignore
    torch.backends.cudnn.deterministic = True  # type: ignore
    torch.backends.cudnn.benchmark...
We import sklearn's StratifiedKFold and create a variable to store the predictions. In `StratifiedKFold()`, `n_splits=5` means that five train/validation splits will be created.

NUM_TEST_DATA = len(test)
skf = StratifiedKFold(n_splits=5)
final_test_pred = np.zeros([NUM_TEST_DATA, 7])
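As a minimal illustration of what `StratifiedKFold` does (with synthetic labels, not the competition data), each validation fold preserves the class ratio of the full dataset:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.arange(20).reshape(-1, 1)       # 20 dummy samples
y = np.array([0] * 10 + [1] * 10)      # two balanced classes

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(X, y)):
    # Each validation fold keeps the 50/50 class ratio of the full data.
    print(fold, len(train_idx), len(val_idx), y[val_idx].mean())
```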
Define the parameters.

lr = 2e-5
adam_epsilon = 1e-8
epochs = 3
num_warmup_steps = 0
num_labels = 7
Define `train()`, `evaluate()`, and `predict()`.

def train(model, train_dataloader):
    train_loss_set = []
    learning_rate = []
    batch_loss = 0
    for step, batch in enumerate(tqdm(train_dataloader)):
        model.train()
        batch = tuple(t.to(device) for t in batch)
        b_input_ids, b_token_type_ids, b_input_mask, b_labels = batch
        outputs...
Calling the `split()` method of `StratifiedKFold()` returns the indices that divide the given data into train data and validation data. Using those indices on the DataFrame, we can split it into train and validation sets. After training and evaluating on each split, we predict on the test data, and the predictions are accumulated into the final prediction array (`final_test_pred`). The total training time is the time for a single training run * the number passed as `n_splits` (here, 5)...

for train_idx, validation_idx in skf.split(dataset["title"], dataset["topic_idx"]):
    dataset_train = pd.DataFrame()
    dataset_val = pd.DataFrame()
    dataset_train["title"] = dataset["title"][train_idx]
    dataset_train["topic_idx"] = dataset["topic_idx"][train_idx]
    dataset_val["title"] = datas...
Some weights of the model checkpoint at klue/bert-base were not used when initializing BertForSequenceClassification: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.weight', 'cls.predictions.decoder.bias', 'cls.pre...
Over the five rounds of cross-validation, the predictions of the models trained on different train/validation splits have been summed into `final_test_pred`. Taking the `argmax` over these accumulated predictions yields the final predictions.

final_test_pred[:10]
len(final_test_pred)
total_pred = np.argmax(final_test_pred, axis=1)
total_pred[:10]
submission = pd.read_csv('data/sample_submission.csv')
submission['topic_idx'] = total_pred
submission.to_csv("results/klue-bert-base-kfold5.csv", index=False)
Exercise 3

**Please Note**: We updated the requirements.txt. Please install the new requirements before editing this exercise.

Import packages

import os
from vll.utils.download import download_mnist

import numpy as np
import matplotlib.pyplot as plt

import skimage
import skimage.io

import torch
import torch.nn.functional as F
from torchvision import transforms

from models.mnist.simple_cnn import Net
MIT
5.0-tl-pytorch.ipynb
titus-leistner/3dcv-students
Task 1 (2 points)

In this task, you will learn some basic tensor operations using the PyTorch library.

Reference for torch: https://pytorch.org/docs/stable/torch.html

# Create a numpy array that looks like this: [0, 1, 2, ..., 19]
arr =

# Convert the numpy array to a torch tensor
tensor =
print(tensor)

# Create a tensor that contains random numbers.
# It should have the same size like the numpy array.
# Multiply it with the previous tensor.
rand_tensor =
tensor =
print(tensor)...
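For reference, one possible way to fill in the blanks above (a sketch, not the official solution):

```python
import numpy as np
import torch

# One possible way to fill in the Task 1 blanks (a sketch, not the
# official solution).
arr = np.arange(20)                     # [0, 1, 2, ..., 19]
tensor = torch.from_numpy(arr)          # numpy -> torch (shares memory)
print(tensor)

rand_tensor = torch.rand(arr.shape[0])  # random values, same size as arr
tensor = tensor * rand_tensor           # elementwise multiplication
print(tensor)
```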
Task 2 (2 points)

Use Autograd to perform operations on a tensor and output the gradients.

# Create a random 2x2 tensor which requires gradients
x =
print(x)

# Create another tensor by adding 2.0
y =
print(y)

# Create a third tensor z = y^2
z =
print(z)

# Compute out as the mean of values in z
out =
print(out)

# Perform back propagation on out

# Print the gradients dout/dx

# Create a copy of y wh...
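A deterministic walk through the same steps (using `torch.ones` instead of random values so the gradient can be checked by hand; not the official solution):

```python
import torch

# Deterministic Task 2 sketch: ones instead of random values so the
# gradient has a known closed form.
x = torch.ones(2, 2, requires_grad=True)
y = x + 2.0
z = y ** 2
out = z.mean()

out.backward()                   # back propagation on out
print(x.grad)                    # dout/dx = 2*(x + 2)/4 = 1.5 everywhere

y_detached = y.detach()          # a copy of y that no longer tracks gradients
print(y_detached.requires_grad)  # False
```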
Task 3 (3 points)

Implement a Dataset class for MNIST.

# We first download the MNIST dataset
download_mnist()

class MNIST:
    """
    Dataset class for MNIST
    """
    def __init__(self, root, transform=None):
        """
        root -- path to either "training" or "testing"
        transform -- transform (from torchvision.transforms) to ...
Task 4 (3 points)

You can now load a pretrained neural network model we provide. Your last task is to run the model on the MNIST test dataset, plot some example images with the predicted labels and compute the prediction accuracy.

def validate(model, data_loader):
    # TODO: Create a 10x10 grid of subplots
    model.eval()
    correct = 0  # count for correct predictions
    with torch.no_grad():
        for i, item in enumerate(data_loader):
            # TODO: unpack item into image and ground truth
            # and run ne...
Selection: Boolean types, numbers, and expressions ![](../Photo/33.png)
- Note: the comparison operator for equality is two equals signs; a single equals sign means assignment.
- In Python, the integer 0 can stand for False, and other numbers stand for True.
- The use of `is` in conditional statements is covered later.
- String comparison uses ASCII values.

Markdown - https://github.com/younghz/Markdown

EP:
- Input a number and determine whether it is odd or even.

# Apart from bool(0), which is False, every other number is True.
# bool(0) also evaluates to False when run.
# if bool(1-1):
#     print(yes)
# else:
#     print(no)
# The result is that "no" is printed.
b1 = bool(4)
print(b1)

i = 3
if i == 5:
    print('i=5')
else:
    print("i!=5")

i = eval(input("输入i"))
if i == 5:
    print('i=5')
else:
    print("i!=5")
输入i3 i!=5
Apache-2.0
7.18.ipynb
Y-kiwi77/python1
Generating random numbers
- The function random.randint(a, b) produces a random integer between a and b, inclusive of both.

Generate a random number and have the user keep guessing: if the input is greater than the random number, report that it is too big; otherwise too small; keep prompting until the guess is correct.

import random

a = random.randint(1, 100)
while 1:
    b = eval(input("比较数"))
    if a > b:
        print("太小了")
    if a < b:
        print("太大了")
    if a == b:
        print("yes")
        break
比较数50 太小了 比较数60 太小了 比较数70 太小了 比较数80 太小了 比较数90 太小了 比较数95 yes
Other random methods
- random.random() returns a random float in the half-open interval [0.0, 1.0).
- random.randrange(a, b) covers the half-open interval [a, b).

EP:
- Generate two random integers number1 and number2, display them to the user, have the user enter their sum, and check whether it is correct.
- Advanced: write a program that calls on students by random index.

import random

a = random.randint(1, 10)
b = random.randint(1, 10)
c = a + b
number = 0
while number < 5:
    d = eval(input("和为?"))
    if c > d:
        print("太小了")
    if c < d:
        print("太大了")
    if c == d:
        print("yes")
        break
    number += 1

import random
a = random.randint(1, 10)
b = random.randint(1, 10)
c = a + b
for i in...
if statements
- A one-way if statement executes its body only when the condition is true.
- Python has several selection statements:
  - one-way if
  - two-way if-else
  - nested if
  - multi-way if-elif-else
- Note: when a statement contains sub-statements, they must be indented at least one level.
- Never mix tabs and spaces; use only tabs or only spaces.
- When output must be displayed regardless of whether the if condition is true, that statement should be aligned with the if.

EP:
- The user inputs a number; determine whether it is odd or even.
- Advanced: see Case Study 4.5, guessing birthdays.

Two-way if-else statement
- If the condition is true, execute the if body; otherwise execute the else body.

a = eval(input("数字"))
if a > 2:
    if a % 2 == 0:
        print("大于二的偶数")
    else:
        print("大于二的奇数")
else:
    print("不大于二")

a = input("有钱吗?")
a1 = "有钱"
b1 = "帅"
c1 = "没有"
if a == a1:  # strings can be compared directly; no extra variables needed
    b = input("帅不帅")
    if b == b1:
        print("有没有老婆")
        c = input("")
        if c == c1:
            print("见一面")
...
有钱吗?有钱 帅不帅帅 有没有老婆 有 滚
EP:
- Generate two random integers number1 and number2, display them to the user, have the user enter a number, and check whether it is correct; if correct, print "you're correct", otherwise print the right answer.

Nested if and multi-way if-elif-else ![](../Photo/35.png)

# Every elif must be preceded by an if.
# Somewhat like else, an elif cannot appear on its own.
# Note: when a != "有", the elif branches below reference b and c before
# they are assigned, which raises a NameError; nested ifs are the fix.
a = input("有钱吗?")
if a == "有":
    b = input("帅不帅 ")
elif b == "不帅":
    c = input("有老婆吗 ")
elif c == "没有":
    print("结婚")
else:
    print("滚")
有钱吗?有 帅不帅 不帅
EP:
- Prompt the user for a year, then display the zodiac animal for that year ![](../Photo/36.png)
- A program that computes the body mass index (BMI)
- BMI = weight in kilograms divided by the square of height in meters ![](../Photo/37.png)

# To edit several lines at once, hold ALT until the cursor becomes a plus
# sign, then drag down over the selected lines and edit them together.
year = eval(input("请输入年份"))
if year % 12 == 0:
    print("猴")
elif year % 12 == 1:
    print("鸡")
elif year % 12 == 2:
    print("狗")
elif year % 12 == 3:
    print("猪")
elif year % 12 == 4:
    print("鼠")
elif year % 12 == 5:
    print("牛")
elif year % 12 == 6:
    print("虎")
elif year % 12 == 7:
    print("兔")
elif ...
请输入身高1.69 请输入体重47 超轻
Logical operators ![](../Photo/38.png) ![](../Photo/39.png) ![](../Photo/40.png)

EP:
- Determining leap years: a year is a leap year if it is divisible by 4 but not by 100, or if it is divisible by 400.
- Prompt the user for a year and report whether it is a leap year.
- Prompt the user for a number and determine whether it is a narcissistic number (水仙花数).

year = eval(input("请输入年份"))
if (year % 100 != 0) and (year % 4 == 0):
    print("是闰年")
elif year % 400 == 0:
    print("是闰年")
else:
    print("是平年")

shu = eval(input("请输入一个数"))
bai = shu // 100
shi = shu // 10
shi1 = shi % 10
ge = shu % 10
# We already know it is a three-digit number, so no further checks are needed.
e = bai ** 3 + shi1 ** 3 + ge ** 3
if e == ...
153 370 371 407
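The leap-year and narcissistic-number rules from this section can be written compactly as predicates (a sketch, separate from the lesson's input-driven code):

```python
def is_leap(year):
    # Leap if divisible by 4 but not by 100, or divisible by 400.
    return (year % 4 == 0 and year % 100 != 0) or year % 400 == 0

def is_narcissistic(n):
    # A three-digit number equal to the sum of the cubes of its digits.
    digits = [int(d) for d in str(n)]
    return len(digits) == 3 and sum(d ** 3 for d in digits) == n

print([y for y in (1900, 2000, 2004, 2023) if is_leap(y)])   # [2000, 2004]
print([n for n in range(100, 1000) if is_narcissistic(n)])   # [153, 370, 371, 407]
```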
Case study: lottery ![](../Photo/41.png)

Homework
- 1 ![](../Photo/42.png)

import math

a = eval(input("a"))
b = eval(input("b"))
c = eval(input("c"))
pan = b ** 2 - 4 * a * c
if pan > 0:
    print("两个根")
elif pan < 0:
    print("没有根")
else:
    print("有一个根")
a1 b2 c3 没有根
- 2![](../Photo/43.png)
import random

a = random.randint(1, 100)
b = random.randint(1, 100)
c = a + b
d = eval(input("和为?"))
if c == d:
    print("真")
else:
    print("假")
- 3![](../Photo/44.png)
x = eval(input("今天是星期几?"))
jth = eval(input("你想算几天以后"))
c = (x + jth) % 7
if c == 0:
    print("今天是星期日")
else:
    print("今天是星期", c)
今天是星期几?5 你想算几天以后7 今天是星期 5
- 4![](../Photo/45.png)
i = eval(input("请输入一个整数"))
c = eval(input("请输入一个整数"))
k = eval(input("请输入一个整数"))
list1 = [i, c, k]
list1.sort()
print(list1)
请输入一个整数5 请输入一个整数1 请输入一个整数9 [1, 5, 9]
- 5![](../Photo/46.png)
w1 = eval(input("请输入包装"))
m1 = eval(input("请输入重量"))
w2 = eval(input("请输入包装"))
m2 = eval(input("请输入重量"))
b1 = w1 * m1
b2 = w2 * m2
if b1 > b2:
    print("b2更合适")
else:
    print("b1更合适")
请输入包装50 请输入重量24.59 请输入包装25 请输入重量11.99 b2更合适
- 6![](../Photo/47.png)
mo1 = eval(input("请输入月"))
year1 = eval(input("请输入年"))
# Leap year: divisible by 4 but not by 100, or divisible by 400.
if ((year1 % 100 != 0) and (year1 % 4 == 0)) or (year1 % 400 == 0):
    if mo1 == 2:
        print(year1, "年", mo1, "月份", "有29天")
else:
    if mo1 == 1:
        print(year1, "年", mo1, "月份", "有31天")
    elif mo1 == 2:
        print(year1, "年", mo1, "月份", "有28天")
    elif mo1 == 3:
        print(year1, "...
请输入月2 请输入年2001 2001 年 2 月份 有28天
- 7![](../Photo/48.png)
import random

yingbi = random.randint(1, 2)
cai = eval(input("你猜猜"))
if yingbi == cai:
    print("正确")
else:
    print("错误")
你猜猜2 错误
- 8![](../Photo/49.png)
import random

dian_nao = random.randint(0, 2)
ren = eval(input("你要出什么?" + "石头=0 剪刀=2 布=1 "))
print(dian_nao)
if ren == dian_nao:
    print("平局")
else:
    if ren == 0 and dian_nao == 2:
        print("赢了")
    elif ren == 2 and dian_nao == 0:
        print("输了")
    elif ren > dian_nao:
        print("赢了")
    else:
        print("输了")
你要出什么?石头=0 剪刀=2 布=1 0 1 输了
- 9![](../Photo/50.png)
import math

year = eval(input("请输入年"))
m = eval(input("请输入月"))
q = eval(input("请输入日"))
if m == 1:
    m = 13
    year = year - 1
if m == 2:
    m = 14
    year = year - 1
h = (q + int(26 * (m + 1) / 10) + int(year % 100) + int(year % 100 / 4) + int(year / 100 / 4) + int(5 * year / 100)) % 7
if h == 0:
    print("今天是星期六")
if h == 1:
    print("今天是星期日")
if h == 2:
    print("今天是星期一...
3
- 10![](../Photo/51.png)
import random

hua = random.randint(1, 4)
daxiao = random.randint(1, 13)
if hua == 1:
    hua = "红桃"
elif hua == 2:
    hua = "梅花"
elif hua == 3:
    hua = "方块"
elif hua == 4:
    hua = "黑桃"
if daxiao == 1:
    daxiao = "Ace"
elif daxiao == 11:
    daxiao = "Jack"
elif daxiao == 12:
    daxiao = "Queen"
elif daxiao == 13:
    daxiao = "King"
print("这张牌是 ", h...
这张牌是 方块 King
- 11![](../Photo/52.png)
shu11 = eval(input("请输入一个数"))
bai = shu11 // 100
shi = shu11 // 10 % 10
ge = shu11 % 10
if bai == ge:
    print(shu11, "是回文数")
else:
    print("不是回文数")
请输入一个数123 不是回文数
- 12![](../Photo/53.png)
bian1 = eval(input("请输入第一条边的边长"))
bian2 = eval(input("请输入第二条边的边长"))
bian3 = eval(input("请输入第三条边的边长"))
if bian1 + bian2 > bian3 and abs(bian1 - bian2) < bian3:
    print("合理")
else:
    print("不合理")

bian1 = eval(input("请输入第一条边的边长"))
bian2 = eval(input("请输入第二条边的边长"))
bian3 = eval(input("请输入第三条边的边长"))
qing3 = bian1 + bian2
qing2 = bian1 + bian3
qing...
请输入第一条边的边长1 请输入第二条边的边长1 请输入第三条边的边长9 不合理
Part I. ETL Pipeline for Pre-Processing the Files PLEASE RUN THE FOLLOWING CODE FOR PRE-PROCESSING THE FILES Import Python packages
# Import Python packages
import pandas as pd
import cassandra
import re
import os
import glob
import numpy as np
import json
import csv
MIT
Data Modeling with Cassandra/Project_1B_ Project_Template.ipynb
fernandofsilva/Data_Engineering-Udacity
Creating list of filepaths to process original event csv data files
# checking your current working directory
print(os.getcwd())

# Get your current folder and subfolder event data
filepath = os.getcwd() + '/event_data'

# Create a for loop to create a list of files and collect each filepath
for root, dirs, files in os.walk(filepath):
    # join the file path and roots with the subdir...
/home/workspace
Processing the files to create the data file csv that will be used for Apache Cassandra tables

# initiating an empty list of rows that will be generated from each file
full_data_rows_list = []

# for every filepath in the file path list
for f in file_path_list:
    # reading csv file
    with open(f, 'r', encoding='utf8', newline='') as csvfile:
        # creating a csv reader object
        csvreader = ...
6821
Part II. Complete the Apache Cassandra coding portion of your project. Now you are ready to work with the CSV file titled event_datafile_new.csv, located within the Workspace directory. The event_datafile_new.csv contains the following columns: - artist - firstName of user- gender of user- item number in session- la...
# This should make a connection to a Cassandra instance on your local
# machine (127.0.0.1)
from cassandra.cluster import Cluster
cluster = Cluster()

# To establish a connection and begin executing queries, we need a session
session = cluster.connect()
Create Keyspace
# TO-DO: Create a Keyspace
try:
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS udacity
        WITH REPLICATION = {
            'class' : 'SimpleStrategy',
            'replication_factor' : 1
        }"""
    )
except Exception as e:
    print(e)
Set Keyspace
# TO-DO: Set KEYSPACE to the keyspace specified above
try:
    session.set_keyspace('udacity')
except Exception as e:
    print(e)
Now we need to create tables to run the following queries. Remember, with Apache Cassandra you model the database tables on the queries you want to run. Create queries to ask the following three questions of the data 1. Give me the artist, song title and song's length in the music app history that was heard during s...
# Creating table for query "sessionId = 338, and itemInSession = 4"
create_table_query = """
    CREATE TABLE IF NOT EXISTS session_library (
        session_id INT,
        item INT,
        ...
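For the first query, the query-first modeling might look like the following CQL; the column types are assumed here, since the notebook's cell is truncated above. Partitioning on `session_id` with `item` as a clustering column is what allows the `WHERE session_id = ? AND item = ?` lookup used below.

```sql
CREATE TABLE IF NOT EXISTS session_library (
    session_id  INT,
    item        INT,
    artist      TEXT,
    song_title  TEXT,
    song_length FLOAT,
    PRIMARY KEY (session_id, item)
);
```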
Do a SELECT to verify that the data have been inserted into each table
query = """
    SELECT artist, song_title, song_length
    FROM session_library
    WHERE session_id = %s AND item = %s
"""
try:
    rows = session.execute(query, (338, 4))
except Exception as e:
    print(e)

for row in rows:
    ...
Artist: Faithless , Song: Music Matters (Mark Knight Dub) , Song length: 495.30731201171875
COPY AND REPEAT THE ABOVE THREE CELLS FOR EACH OF THE THREE QUESTIONS
# Creating table for query "userid = 10, sessionid = 182" sorted by item
create_table_query = """
    CREATE TABLE IF NOT EXISTS user_library (
        user_id TEXT,
        session_id INT,
        ...
First Name: Jacqueline , Last Name: Lynch First Name: Tegan , Last Name: Levine First Name: Sara , Last Name: Johnson
Drop the tables before closing out the sessions
## TO-DO: Drop the tables before closing out the sessions
try:
    session.execute("DROP TABLE IF EXISTS session_library")
    session.execute("DROP TABLE IF EXISTS user_library")
    session.execute("DROP TABLE IF EXISTS song_library")
except Exception as e:
    print(e)
Close the session and cluster connection

session.shutdown()
cluster.shutdown()
Module 2.3: Working with LSTMs in Keras (A Review)

We turn to implementing a type of recurrent neural network known as the LSTM in the Keras functional API. In this module we will pay attention to:
1. Using the Keras functional API for defining models.
2. Mounting your Google Drive to your Colab environment for file interface...

import sys
import numpy
from google.colab import drive
from keras.models import Sequential
from keras import Model
from keras.optimizers import Adadelta
from keras.layers import Dense, Dropout, LSTM, Input
from keras.callbacks import ModelCheckpoint
from keras.utils import np_utils
Using TensorFlow backend.
MIT
Module 2.3_ LSTMs.ipynb
PacktPublishing/-Mastering-Keras
We will have a little fun and try to teach a neural network to write like Lewis Carroll, the author of Alice in Wonderland. Note, though, that the same technique can be used to model any sequential system, and to generate simulations from seeds for such a system. Here the sequence is the characters written by Carroll duri...

# Note: You will need to mount your drive every time you
# run code in this tutorial.
drive.mount('/content/drive')
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&scope=email%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdocs.test%20https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive%2...
Now we can load the file using code and prepare the data. We want to work with sequences of 100 characters as input data, and our target will be the next (101st) character. To keep things simple, we will ignore upper/lower case character distinctions, and cast all alphabetical characters to lower case. To allow our mode...

def load_alice(rawTextFile="/content/drive/My Drive/Mastering Keras Datasets/Alice.txt"):
    # load ascii text and convert to lowercase
    raw_text = open(rawTextFile, encoding='utf-8').read()
    raw_text = raw_text.lower()

    # create mapping of unique chars to integers
    chars = sorted(list(set(raw...
Now let's load the data. X and Y are the input and target label datasets we will use in training. X_ is the un-reshaped X data for use later.
X,Y,X_,n_vocab,int_to_char = load_alice()
Total Characters: 163810 Total Vocab: 58 Total Patterns: 163710
You can play around below to look at the shape of the resulting X and Y arrays, as well as their contents. But they are no longer understandable character strings.
# Play around here to look at data characteristics
Now we define our LSTM using the Keras functional API. We are going to make use of LSTM layers, and add dropout layers for regularization. We will pass the data to the model-defining function so that we can read its input and output dimensions, rather than hard-coding them. For comparison, a second version of the functi...

def get_model(X, Y):
    # define the LSTM model
    inputs = Input(shape=(X.shape[1], X.shape[2]), name="Input")
    lstm1 = LSTM(256, input_shape=(100, 1), return_sequences=True)(inputs)
    drop1 = Dropout(0.2)(lstm1)
    lstm2 = LSTM(256)(drop1)
    drop2 = Dropout(0.2)(lstm2)
    outputs = Dense(Y.shape[1], activation='softmax')(...
We get our model.
model=get_model(X,Y)
Now we will define an optimizer and compile the model. If you are unfamiliar with the different types of optimizers available in Keras, I suggest you read the Keras documentation [here](https://keras.io/optimizers/) and play around with training the model using different alternatives.
opt=Adadelta()
And we compile our model with the optimizer ready for training. We use categorical crossentropy as our loss function as this is a good default choice for working with a multi-class categorical target variable (i.e. the next character labels).
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
Now we will make a function to fit the model. We will not do this very professionally (it is just a fun project), and so will not use any validation data. Rather, we will just run the training for a number of epochs - 100 by default, though you can change this. We will, though, use a ModelCheckpoint callback to save the...

def fit_model(model, X, Y, epochs=100):
    # define the checkpoint callback
    filepath = "/content/drive/My Drive/Mastering Keras Datasets/alice_best_weights.hdf5"
    checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1,
                                 save_best_only=True, mode='min')
    callbacks_list ...
We would then fit (train) the model by calling the above function.*It is not expected that you train the network using this function - see below to load trained weights from your google drive.*
model=fit_model(model,X,Y,100)
Epoch 1/100 163710/163710 [==============================] - 3246s 20ms/step - loss: 3.0840 - acc: 0.1663 Epoch 00001: loss improved from inf to 3.08398, saving model to /content/drive/My Drive/Mastering Keras Datasets/alice_best_weights.hdf5
Here we will load saved weights. You can use the "alice_best_weights.hdf5" file that comes with the course - just place it in the same folder as the "alice.txt" file in your Google Drive. This file has been trained for 200 epochs, and gets a loss around 1.16. If you train the network yourself, the best weights will be ...

model = get_model(X, Y)
filepath = "/content/drive/My Drive/Mastering Keras Datasets/alice_best_weights.hdf5"
model.load_weights(filepath)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_grap...
Now we can see if our network has mastered the art of writing like Lewis Carroll! Let's write a function to let us see, and then call it.
def write_like_Lewis_Carroll(model, X_, n_vocab, int_to_char):
    # pick a random seed...
    start = numpy.random.randint(0, len(X_) - 1)
    # ... in order to decide which X datum to use to start
    pattern = X_[start]
    print("Seed:")
    print("\"", ''.join([int_to_char[value] for value in pattern]), "\"")
    # generate chara...
Seed: " for it to speak with. alice waited till the eyes appeared, and then nodded. 'it's no use speaking t " o see the mock turtle shat ' 'i should hiv tereat ' thought alice, 'i must be giederen seams to be a bonk,' she said to herself, 'it would be of very curious to onow what there was a sery dortut, and the oora...
Model Training - Basic Model

In this Notebook, we will go through building a basic PyTorch Model for Training, and training it to get results on our dataset.

Imports

In this project, we will be using PyTorch for Deep Learning. NLP Pre-Processing, however, will be done using Keras's modules, because I prefer the impleme...

import pandas as pd
import numpy as np
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader
import torch.nn.functional as F
from sklearn.metrics import accuracy_score, f1_score, recall_score, precision_score
import math
from numpy import save, load
import keras_sequence_preproce...
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
Word EmbeddingsWe have 2 different types of Word Embeddings we will try in this application: Glove and FastText. To use the specific embedding, run that cell and not the other, as both are loaded in with the same formatting.
embed_size = 300

# GLOVE Embeddings
embeddings_dict = {}
with open('../Embeddings/glove.6B/glove.6B.%dd.txt' % (embed_size), 'rb') as f:
    for line in f:
        values = line.split()
        word = values[0]
        vector = np.asarray(values[1:], "float32")
        embeddings_dict[word] = vector

# FASTTEXT Embeddi...
We build a word index for the dataset. To do this quickly, we simply iterate over the dataset and assign an integer value to each word.

word_index = {}
token_num = 0
for row in quora_train_text[['cleaned_text', 'target']].iterrows():
    text, label = row[1]
    tokens = [token for token in text.split(' ')]
    for token in tokens:
        if token not in word_index:
            word_index[token] = token_num
            token_num = token...
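On toy sentences, the word-index construction boils down to the following (a hypothetical mini-corpus stands in for `quora_train_text`):

```python
# Hypothetical mini-corpus standing in for the `cleaned_text` column.
sentences = ["how do magnets work", "why do cats purr", "how do planes fly"]

word_index = {}
for sentence in sentences:
    for token in sentence.split(' '):
        if token not in word_index:
            word_index[token] = len(word_index)

print(word_index['how'], word_index['do'], word_index['fly'])  # 0 1 8
```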
Next, we encode the individual sentences into sequences of integers from the word index. Then we pad them to fixed lengths using post-sequence padding.

def encode_sentences(sentence, word_index=word_index, max_words=MAX_WORDS):
    output = []
    for token in sentence.split(' '):
        if (token in word_index) and (word_index[token] < max_words):
            output.append(word_index[token])
    return output

encoded_sentences = [encode_sentences(sent) for sent i...
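Encoding and post-padding can be sketched in plain Python (mirroring what Keras's `pad_sequences(..., padding='post')` does; the tiny `word_index` here is hypothetical):

```python
def encode(sentence, word_index, max_words=10000):
    # Keep only tokens that are in the index and below the vocab cap.
    return [word_index[t] for t in sentence.split(' ')
            if t in word_index and word_index[t] < max_words]

def pad_post(seqs, maxlen, value=0):
    # Post-padding: append `value` after each sequence up to `maxlen`.
    return [seq[:maxlen] + [value] * (maxlen - len(seq)) for seq in seqs]

word_index = {'how': 1, 'do': 2, 'magnets': 3, 'work': 4}
seqs = [encode("how do magnets work", word_index), encode("how do", word_index)]
print(pad_post(seqs, maxlen=6))  # [[1, 2, 3, 4, 0, 0], [1, 2, 0, 0, 0, 0]]
```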
To do training / testing, we will divide the dataset into proper training and validation sets: 85% of the dataset for training, and the remaining 15% for validation.

val_split = int(0.85 * len(quora_train_text))
train_ds = padded_sequences[:val_split]
val_ds = padded_sequences[val_split:]
train_y = quora_train_text.iloc[:val_split]['target'].values
val_y = quora_train_text.iloc[val_split:]['target'].values
train_lens = encoded_lengths[:val_split]
val_lens = encoded_lengths...
We build an Embeddings Matrix. Each row in the matrix is a vector from Glove / Fasttext.
vocab_size = min(MAX_WORDS, len(word_index)) + 1
embeddings_matrix = np.zeros((vocab_size, embed_size))
for word, posit in word_index.items():
    if posit >= vocab_size:
        break
    vec = embeddings_dict.get(word)
    if vec is None:
        vec = np.random.sample(embed_size)
        embeddings_dict...
Build a Data Loader to iterate over during the training process in a fixed batch size:
class QuoraDataset(Dataset):
    def __init__(self, encoded_sentences, labels, lengths):
        self.encoded_sentences = encoded_sentences
        self.labels = labels
        self.lengths = lengths

    def __len__(self):
        return len(self.encoded_sentences)

    def __getitem__(self, index):
        ...
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
Creating a Model

The Torch model will have the following architecture:

1. Embeddings Layer
2. 1st LSTM Layer
3. 1st Dense Fully Connected Layer
4. ReLU Activation
5. 2nd LSTM Layer
6. Global Max-Average Pooling Layer
7. 2nd Dense Fully Connected Layer
class Model(nn.Module): def __init__(self, embedding_matrix, hidden_unit = 64): super(Model, self).__init__(); vocab_size = embeddings_tensor.shape[0]; embedding_dim = embeddings_tensor.shape[1]; self.embedding_layer = nn.Embedding(vocab_size, embedding_dim); self.em...
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
We use a binary cross-entropy loss function, and an Adam optimizer with a 0.003 learning rate.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(lr=0.003, params=model.parameters())
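As a sketch of what a loss like `BCEWithLogitsLoss` computes for a single example (plain Python, one sample; the real implementation averages over a batch and uses a numerically stabler formulation):

```python
import math

def bce_with_logits(z, y):
    # loss = -[ y*log(sigmoid(z)) + (1-y)*log(1 - sigmoid(z)) ]
    p = 1.0 / (1.0 + math.exp(-z))
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

print(round(bce_with_logits(0.0, 1.0), 4))  # 0.6931 (= log 2, maximal uncertainty)
```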
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
Model Training

Now we write the methods to iterate over the data to train and evaluate our model.
def train(nn_model, nn_optimizer, nn_criterion, data_loader, val_loader = None, num_epochs = 5, print_ratio = 0.1, verbose=True):
    print_every_step = int(print_ratio * len(data_loader))
    if verbose:
        print('Training with model: ')
        print(nn_model)
    for epoch in range(num_epochs...
_____no_output_____
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
Running Training on the Model
train(model, optimizer, criterion, train_loader)
eval(model, criterion, val_loader)
EVAL: [ 764 16.99 sec] loss: 0.046 acc: 0.953 prec: 0.617 rec: 0.480 f1: 0.540
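The precision, recall, and F1 figures in the evaluation line above follow the usual definitions from true-positive, false-positive, and false-negative counts; a small illustrative helper (hypothetical counts, not the notebook's):

```python
def prf1(tp, fp, fn):
    # precision: fraction of predicted positives that are correct;
    # recall: fraction of actual positives that were found;
    # F1: harmonic mean of the two.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

print(tuple(round(v, 3) for v in prf1(30, 10, 30)))  # (0.75, 0.5, 0.6)
```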
MIT
Model Training - Basic Model.ipynb
ravishchawla/QuorainsincereQuestions
Recurrent Neural Networks with Keras

Sentiment analysis from movie reviews

This notebook is inspired by the imdb_lstm.py example that ships with Keras. But since I used to run IMDb's engineering department, I couldn't resist! It's actually a great example of using RNNs. The data set we're using consists of user-generat...
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding
from tensorflow.keras.layers import LSTM
from tensorflow.keras.datasets import imdb
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
Now import our training and testing data. We specify that we only care about the 20,000 most popular words in the dataset in order to keep things somewhat manageable. The dataset includes 25,000 training reviews and 25,000 testing reviews.
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=20000)
Loading data...
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
Let's get a feel for what this data looks like. Let's look at the first training feature, which should represent a written movie review:
x_train[0]
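To see why the review looks like a list of integers, here is a sketch of mapping indices back to words with a toy vocabulary (not the real IMDb index; in Keras the actual mapping is available via `imdb.get_word_index()`, with indices offset to reserve slots for special tokens):

```python
# Toy word index, illustrative only.
word_index = {'this': 1, 'movie': 2, 'was': 3, 'great': 4}
index_word = {i: w for w, i in word_index.items()}

encoded_review = [1, 2, 3, 4]
decoded = ' '.join(index_word.get(i, '?') for i in encoded_review)
print(decoded)  # this movie was great
```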
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
That doesn't look like a movie review! But this data set has spared you a lot of trouble - they have already converted words to integer-based indices. The actual letters that make up a word don't really matter as far as our model is concerned; what matters are the words themselves - and our model needs numbers to work ...
y_train[0]
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
They are just 0 or 1, which indicates whether the reviewer said they liked the movie or not.So to recap, we have a bunch of movie reviews that have been converted into vectors of words represented by integers, and a binary sentiment classification to learn from.RNN's can blow up quickly, so again to keep things managab...
x_train = sequence.pad_sequences(x_train, maxlen=80)
x_test = sequence.pad_sequences(x_test, maxlen=80)
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
Now let's set up our neural network model! Considering how complicated a LSTM recurrent neural network is under the hood, it's really amazing how easy this is to do with Keras.We will start with an Embedding layer - this is just a step that converts the input data into dense vectors of fixed size that's better suited f...
model = Sequential()
model.add(Embedding(20000, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
As this is a binary classification problem, we'll use the binary_crossentropy loss function. And the Adam optimizer is usually a good choice (feel free to try others.)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
_____no_output_____
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
Now we will actually train our model. RNNs, like CNNs, are very resource heavy. Keeping the batch size relatively small is the key to enabling this to run on your PC at all. In the real world of course, you'd be taking advantage of GPUs installed across many computers on a cluster to make this scale a lot better. War...
model.fit(x_train, y_train, batch_size=32, epochs=15, verbose=2, validation_data=(x_test, y_test))
C:\Users\Frank\AppData\Local\Enthought\Canopy\edm\envs\User\lib\site-packages\tensorflow\python\ops\gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory. "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
OK, let's evaluate our model's accuracy:
score, acc = model.evaluate(x_test, y_test, batch_size=32, verbose=2)
print('Test score:', score)
print('Test accuracy:', acc)
Test score: 0.9316869865119457 Test accuracy: 0.80904
MIT
surprise/source_code/DeepLearningIntro/Keras-RNN.ipynb
AlphaSunny/RecSys
First Python Notebook project

- we write some simple code and upload it to git
- task: we will define the function

$$\phi(x) = \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}}$$
import numpy as np

def phi(x):
    out = 1./np.sqrt(2.*np.pi)*np.exp(-x**2/2.)
    return out

import matplotlib.pyplot as plt
%matplotlib inline
x_cod = np.linspace(-5,5,111)
y_cod = phi(x_cod)
plt.plot(x_cod,y_cod)
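A quick sanity check of $\phi$, restated here with the `math` module so the check stands alone: the density at 0 is $1/\sqrt{2\pi} \approx 0.3989$, and the function is even.

```python
import math

def phi(x):
    # Standard normal density: exp(-x^2/2) / sqrt(2*pi)
    return 1.0 / math.sqrt(2.0 * math.pi) * math.exp(-x * x / 2.0)

print(round(phi(0.0), 4))     # 0.3989
print(phi(1.5) == phi(-1.5))  # True (symmetry)
```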
_____no_output_____
MIT
src/prj01.ipynb
hhk54250/20MA573-HHK
Predicting house prices: a regression exampleAnother common type of machine-learning problem is regression, which consists of predicting a continuous value instead of a discrete label: for instance, predicting the temperature tomorrow, given meteorological data; or predicting the time that a software project will take...
import os, time import tensorflow as tf physical_devices = tf.config.experimental.list_physical_devices('GPU') tf.config.experimental.set_memory_growth(physical_devices[0], True) tf.keras.backend.clear_session() from tensorflow.keras.datasets import boston_housing (train_data, train_targets), (test_data, test_target...
_____no_output_____
Apache-2.0
Training/RegressionExampleComplete.ipynb
fermaat/afi_deep_learning_intro
The prices are typically between $10,000 and $50,000. If that sounds cheap, remember that this was the mid-1970s, and these prices aren't adjusted for inflation.

Preparing the data

It would be problematic to feed into a neural network values that all take wildly different ranges. The network might be able to automati...
mean = train_data.mean(axis=0)
train_data -= mean
std = train_data.std(axis=0)
train_data /= std
test_data -= mean
test_data /= std
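A small self-contained illustration of this feature-wise standardization (toy data, not the Boston set): after subtracting the training mean and dividing by the training std, each training column has mean ~0 and std ~1, and the test data reuses the *training* statistics.

```python
import numpy as np

# Toy 3-sample, 2-feature "training set" for illustration.
train = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
test = np.array([[2.0, 20.0]])

mean, std = train.mean(axis=0), train.std(axis=0)
train_n = (train - mean) / std
test_n = (test - mean) / std  # note: training mean/std, never the test set's own

print(np.allclose(train_n.mean(axis=0), 0.0))  # True
print(np.allclose(train_n.std(axis=0), 1.0))   # True
```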
_____no_output_____
Apache-2.0
Training/RegressionExampleComplete.ipynb
fermaat/afi_deep_learning_intro
Model Architecture

Because so few samples are available, you'll use a very small network with two hidden layers, each with 64 units. In general, the less training data you have, the worse overfitting will be, and using a small network is one way to mitigate overfitting.
from tensorflow.keras import models from tensorflow.keras import layers def build_model(): model = models.Sequential() model.add(layers.Dense(64, activation='relu', input_shape=(train_data.shape[1],))) model.add(layers.Dense(64, activation='relu')) model.add(layers.Dense(1)) model.compile(optim...
_____no_output_____
Apache-2.0
Training/RegressionExampleComplete.ipynb
fermaat/afi_deep_learning_intro
Validating your approach using K-fold validation

To evaluate your network while you keep adjusting its parameters (such as the number of epochs used for training), you could split the data into a training set and a validation set, as you did in the previous examples. But because you have so few data points, the validati...
import numpy as np

k = 4
num_val_samples = len(train_data) // k
num_epochs = 100
all_scores = []
for i in range(k):
    print('processing fold #', i)
    val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
    val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
    partial_trai...
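A pure-Python sketch of the k-fold index arithmetic the loop above relies on: fold `i` takes samples `[i*n : (i+1)*n]` for validation and everything else for training (helper name is illustrative):

```python
def kfold_slices(num_samples, k):
    # Return (train_indices, val_indices) pairs for each of the k folds.
    n = num_samples // k
    folds = []
    for i in range(k):
        val = list(range(i * n, (i + 1) * n))
        train = [j for j in range(num_samples) if j not in set(val)]
        folds.append((train, val))
    return folds

print(kfold_slices(8, 4)[1])  # ([0, 1, 4, 5, 6, 7], [2, 3])
```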
_____no_output_____
Apache-2.0
Training/RegressionExampleComplete.ipynb
fermaat/afi_deep_learning_intro
Let's train the network for a little longer: 500 epochs
num_epochs = 500
all_mae_histories = []
for i in range(k):
    print('processing fold #', i)
    # data from partition #k
    val_data = train_data[i * num_val_samples: (i + 1) * num_val_samples]
    val_targets = train_targets[i * num_val_samples: (i + 1) * num_val_samples]
    partial_train_data = np.concatenate([train_data[:i...
_____no_output_____
Apache-2.0
Training/RegressionExampleComplete.ipynb
fermaat/afi_deep_learning_intro
from google.colab import drive drive.mount('/content/gdrive') import matplotlib.pyplot as plt import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow.keras.models import Sequential import PIL from PIL import Image import time import os os.environ['TF_C...
_____no_output_____
MIT
speedlimits.ipynb
kimsejin111/git_test
model2.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True), metrics=['accuracy']) epochs=15 history=model2.fit( train_ds, validation_data=val_ds, epochs=epochs, ) acc_E=history.history['accuracy'] val_E_acc=history.history['val_accuracy'...
_____no_output_____
MIT
speedlimits.ipynb
kimsejin111/git_test
Scenario plots with single lines
output_names = ["notifications", "icu_occupancy"] scenario_x_min, scenario_x_max = 367, 920 sc_to_plot = [0, 1] legend = ["With vaccine", "Without vaccine"] lift_time = 731 text_font = 14 sc_colors = [COLOR_THEME[i] for i in scenario_list] sc_linestyles = ["solid"] * (len(scenario_list)) for output_type in ["median"...
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/user/pjayasundara/AEFI_with_uncertainty.ipynb
monash-emu/AuTuMN
Make Adverse Effects figures
params = project.param_set.baseline.to_dict() ae_risk = { "AstraZeneca": params["vaccination_risk"]["tts_rate"], "mRNA": params["vaccination_risk"]["myocarditis_rate"] } agg_agegroups = ["10_14","15_19", "20_29", "30_39", "40_49", "50_59", "60_69", "70_plus"] text_font = 12 vacc_scenarios = { "mRNA": 2...
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/user/pjayasundara/AEFI_with_uncertainty.ipynb
monash-emu/AuTuMN
Counterfactual no vaccine scenario
output_type = "uncertainty" output_names = ["notifications", "icu_occupancy", "accum_deaths"] sc_to_plot = [0, 1] x_min, x_max = 400, 670 vacc_start = 426 for output_name in output_names: axis = plot_outputs(output_type, output_name, sc_to_plot, sc_linestyles, sc_colors, False, x_min=400, x_max=670) y_max = plt...
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/user/pjayasundara/AEFI_with_uncertainty.ipynb
monash-emu/AuTuMN
Number of lives saved
today = 660  # 21 Oct
df = uncertainty_df[(uncertainty_df["type"] == "accum_deaths") & (uncertainty_df["quantile"] == 0.5) & (uncertainty_df["time"] == today)]
baseline = float(df[df["scenario"] == 0]["value"])
counterfact = float(df[df["scenario"] == 1]["value"])
print(counterfact - baseline)
_____no_output_____
BSD-2-Clause-FreeBSD
notebooks/user/pjayasundara/AEFI_with_uncertainty.ipynb
monash-emu/AuTuMN
Captum Insights for Visual Question Answering

This notebook provides a simple example for the [Captum Insights API](https://captum.ai/docs/captum_insights), which is an easy-to-use API built on top of Captum that provides a visualization widget. It is suggested to first read the multi-modal [tutorial](https://captum.ai...
import sys, os # Replace the placeholder strings with the associated # path for the root of pytorch-vqa and pytorch-resnet respectively PYTORCH_VQA_DIR = os.path.realpath("../../pytorch-vqa") PYTORCH_RESNET_DIR = os.path.realpath("../../pytorch-resnet") # Please modify this path to where it is located on your machin...
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
Now, we will import the necessary modules to run the code in this tutorial. Please make sure you have the [prerequisites to run captum](https://captum.ai/docs/getting_started), along with the pre-requisites to run this tutorial (as described in the first section).
import numpy as np import matplotlib.pyplot as plt from matplotlib.colors import LinearSegmentedColormap from PIL import Image import torch import torchvision import torchvision.transforms as transforms import torch.nn.functional as F try: import resnet # from pytorch-resnet except: print("please provide a v...
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
VQA Model Setup

Let's load the VQA model (again, please refer to the [model interpretation tutorial on VQA](https://captum.ai/tutorials/Multimodal_VQA_Interpret) if you want details).
saved_state = torch.load(VQA_MODEL_PATH, map_location=device) # reading vocabulary from saved model vocab = saved_state["vocab"] # reading word tokens from saved model token_to_index = vocab["question"] # reading answers from saved model answer_to_index = vocab["answer"] num_tokens = len(token_to_index) + 1 # read...
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum
Let's modify the VQA model to use pytorch-resnet. Our model will be called `vqa_resnet`.
class ResNetLayer4(torch.nn.Module): def __init__(self): super().__init__() self.r_model = resnet.resnet152(pretrained=True) self.r_model.eval() self.r_model.to(device) self.buffer = None def save_output(module, input, output): self.buffer = output ...
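The wrapper above captures an intermediate layer's output into `self.buffer` via a forward hook. A minimal sketch of that pattern with a toy module (a small `nn.Linear` standing in for resnet152; names are illustrative):

```python
import torch
import torch.nn as nn

class FeatureTap(nn.Module):
    def __init__(self):
        super().__init__()
        self.inner = nn.Linear(4, 3)
        self.buffer = None
        # Register a hook that fires after inner's forward pass.
        self.inner.register_forward_hook(self._save_output)

    def _save_output(self, module, inputs, output):
        # Stash the submodule's output so the caller can read it later.
        self.buffer = output

    def forward(self, x):
        self.inner(x)       # run the submodule; the hook captures its output
        return self.buffer  # hand back the captured intermediate features

tap = FeatureTap()
out = tap(torch.zeros(2, 4))
print(out.shape)  # torch.Size([2, 3])
```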
_____no_output_____
BSD-3-Clause
tutorials/Multimodal_VQA_Captum_Insights.ipynb
doc22940/captum