Columns: markdown (strings, 0 to 1.02M chars), code (strings, 0 to 832k chars), output (strings, 0 to 1.02M chars), license (strings, 3 to 36 chars), path (strings, 6 to 265 chars), repo_name (strings, 6 to 127 chars)
Let's create the simplest network with the new layers: Convolutional - `nn.Conv2d`, MaxPool - `nn.MaxPool2d`
nn_model = nn.Sequential( nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(4), nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(4), Flattener(), nn.Linear(64*2*2, 10), ) ...
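`Flattener` in the model above is a helper module from the assignment whose definition is not shown here. A minimal sketch of what it is assumed to do (flatten everything except the batch dimension so an `nn.Linear` layer can follow the convolutions):

```python
import torch.nn as nn

class Flattener(nn.Module):
    """Flattens all dimensions except the batch one, e.g. (N, 64, 2, 2) -> (N, 256)."""
    def forward(self, x):
        batch_size = x.size(0)
        return x.view(batch_size, -1)
```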
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
Restore the `compute_accuracy` function from the previous assignment. The only difference in the new version is that it must move the data to the GPU before running it through the model. Do this the same way the `train_model` function does.
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs): loss_history = [] train_history = [] val_history = [] for epoch in range(num_epochs): model.train() # Enter train mode loss_accum = 0 correct_samples = 0 total_samples = 0 ...
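A minimal sketch of the requested `compute_accuracy`, assuming the loaders yield `(x, y)` batches and that `device` is the same global GPU device that `train_model` uses; this is one possible solution, not the course's reference implementation:

```python
import torch

def compute_accuracy(model, loader):
    """Computes accuracy of the model on the dataset wrapped in the loader."""
    model.eval()  # switch off dropout/batchnorm training behaviour
    correct_samples = 0
    total_samples = 0
    with torch.no_grad():
        for x, y in loader:
            # Move the batch to the GPU, the same way train_model does
            x_gpu = x.to(device)
            y_gpu = y.to(device)
            prediction = model(x_gpu)
            _, indices = torch.max(prediction, dim=1)
            correct_samples += torch.sum(indices == y_gpu).item()
            total_samples += y.shape[0]
    return float(correct_samples) / total_samples
```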
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
Data augmentation. When working with images, one especially important technique is data augmentation - that is, generating additional training data from the original samples. This way we can "enlarge" the training set, which leads to better network performance. In...
tfs = transforms.Compose([ transforms.ColorJitter(hue=.50, saturation=.50), transforms.RandomHorizontalFlip(), transforms.RandomVerticalFlip(), transforms.RandomRotation(50, resample=PIL.Image.BILINEAR), transforms.ToTensor(), transforms.Normalize(mean=[0.43,0.44,0.47], st...
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
Let's visualize the augmentation results (in general, looking at the generated data is always very useful).
# TODO: Visualize some augmented images! # hint: you can create new datasets and loaders to accomplish this # Based on the visualizations, should we keep all the augmentations? tfs = transforms.Compose([ transforms.ColorJitter(hue=.20, saturation=.20), transforms.RandomHorizontalFlip(), transforms.RandomV...
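A hedged sketch of one way to look at augmented samples: build a dataset with only the geometric/color transforms (no `ToTensor`/`Normalize`, so items stay viewable PIL images) and plot a few of them with matplotlib. The `./data/` path and the use of `datasets.SVHN` are assumptions based on the assignment's setup:

```python
import matplotlib.pyplot as plt
from torchvision import datasets, transforms

# Visualization-only transforms: keep samples as PIL images
vis_tfs = transforms.Compose([
    transforms.ColorJitter(hue=.20, saturation=.20),
    transforms.RandomHorizontalFlip(),
])

# Assumes SVHN was already downloaded to ./data/ earlier in the notebook
data_aug_vis = datasets.SVHN('./data/', transform=vis_tfs)

plt.figure(figsize=(10, 4))
for i in range(10):
    image, _ = data_aug_vis[i]
    plt.subplot(2, 5, i + 1)
    plt.imshow(image)
    plt.axis('off')
plt.show()
```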
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
Are all the augmentations equally useful on this dataset? Could some of them confuse the model? Select only the appropriate ones.
# TODO: tfs = transforms.Compose([ # TODO: Add good augmentations transforms.ToTensor(), transforms.Normalize(mean=[0.43,0.44,0.47], std=[0.20,0.20,0.20]) ]) # TODO create new instances of loaders with the augmentations you chose train_aug_loader = None # ...
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
LeNet. Let's try to implement the classic convolutional neural network architecture proposed by Yann LeCun in 1998. In its time it achieved impressive results on MNIST; let's see how it copes with SVHN. It is described in the paper ["Gradient Based Learning Applied to Document Recognition"](http://yann.lecun.c...
# TODO: Implement LeNet-like architecture for SVHN task lenet_model = nn.Sequential( ) lenet_model.type(torch.cuda.FloatTensor) lenet_model.to(device) loss = nn.CrossEntropyLoss().type(torch.cuda.FloatTensor) optimizer = optim.SGD(lenet_model.parameters(), lr=1e-1, weight_decay=1e-4) # Let's train it! loss_...
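One possible LeNet-style filling of the empty `nn.Sequential` above for 32x32 RGB SVHN inputs; a sketch rather than the reference solution, and it reuses the `Flattener` helper from the earlier cells:

```python
lenet_model = nn.Sequential(
    nn.Conv2d(3, 6, 5),        # 32x32 -> 28x28
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),           # 28x28 -> 14x14
    nn.Conv2d(6, 16, 5),       # 14x14 -> 10x10
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2),           # 10x10 -> 5x5
    Flattener(),
    nn.Linear(16 * 5 * 5, 120),
    nn.ReLU(inplace=True),
    nn.Linear(120, 84),
    nn.ReLU(inplace=True),
    nn.Linear(84, 10),
)
```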
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
Hyperparameter tuning
# The key hyperparameters we're going to tune are learning speed, annealing rate and regularization # We also encourage you to try different optimizers as well Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg']) RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history'...
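A sketch of a simple grid search over these named tuples. It assumes `train_model`, `train_aug_loader`, `val_loader`, `loss`, `optim` and `device` from earlier cells, that `train_model` returns `(loss_history, train_history, val_history)`, that the last (truncated) `RunResult` field is the final validation accuracy, and that `build_model()` is a hypothetical helper that rebuilds the architecture; learning-rate annealing is kept fixed here because applying it would require stepping an lr scheduler inside `train_model`:

```python
import itertools

learning_rates = [1e-1, 1e-2, 1e-3]
regs = [1e-4, 1e-5]
anneal = 5  # placeholder; real annealing would use torch.optim.lr_scheduler inside the training loop

run_record = {}  # Hyperparams -> RunResult
for lr, reg in itertools.product(learning_rates, regs):
    params = Hyperparams(learning_rate=lr, anneal_epochs=anneal, reg=reg)
    model = build_model().to(device)
    optimizer = optim.SGD(model.parameters(), lr=lr, weight_decay=reg)
    _, train_history, val_history = train_model(
        model, train_aug_loader, val_loader, loss, optimizer, num_epochs=10)
    run_record[params] = RunResult(model, train_history, val_history, val_history[-1])

best_params = max(run_record, key=lambda p: run_record[p].final_val_accuracy)
print("Best validation accuracy: %4.4f with %s"
      % (run_record[best_params].final_val_accuracy, best_params))
```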
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
Open-ended exercise - let's catch up with and surpass LeNet! Try to find an architecture and training settings that perform better than our baselines. What you can and should try: - BatchNormalization (for convolution layers it is called [batchnorm2d](https://pytorch.org/docs/stable/nn.html#batchnorm2d) in PyTorch) - Change the...
best_model = None
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
The final touch - let's check the best model on the test set. For a change, you write the code for running the model on the test set yourself. As a result, you should train a model that achieves more than **90%** accuracy on the test set. As usual, the best result in the group earns extra points!
# TODO Write the code to compute accuracy on test set final_test_accuracy = 0.0 print("Final test accuracy - ", final_test_accuracy)
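A minimal sketch of the test-set check, assuming a `test_loader` built like the other loaders and the `compute_accuracy` helper restored earlier; `best_model` is whichever model the previous section selected:

```python
final_test_accuracy = compute_accuracy(best_model, test_loader)
print("Final test accuracy - ", final_test_accuracy)
```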
_____no_output_____
MIT
assignments/assignment3/PyTorch_CNN.ipynb
pavel2805/my_dlcoarse_ai
Compute the map for baxter
import sympy as sy from sympy import sin, cos, pi, sqrt import math #from math import pi q = sy.Matrix(sy.MatrixSymbol('q', 7, 1)) L, h, H, L0, L1, L2, L3, L4, L5, L6, R = sy.symbols('L, h, H, L0, L1, L2, L3, L4, L5, L6, R') # L = 278e-3 # h = 64e-3 # H = 1104e-3 # L0 = 270.35e-3 # L1 = 69e-3 # L2 = 364.35e-3 # L3 = 69...
_____no_output_____
MIT
misc/baxter/derivation.ipynb
YoshimitsuMatsutaIe/rmp_test
Collapse all 2-cells
all_X,collapses,all_losses,total_loss,all_signals,phispsis= dmt.sequence_optimal_up_collapses(X=X,kX=kX,dimq=1,signal=s1,steps=120) colX=all_X[-1] colS=all_signals[-1] s0 = ['black']*len(X[0])#np.zeros(len(simplices[0])) f_X=all_X[-1] f_s=all_signals[-1] fig = plt.figure(figsize=(6,7)) ax = fig.add_subplot(111) dmtvis....
_____no_output_____
MIT
total-collapsing.ipynb
stefaniaebli/dmt-signal-processing
Randomly collapse 2-cells
all_X_rand,collapses_rand,all_losses_rand,total_loss_rand,all_signals_rand,phispsis_rand= dmt.sequence_optimal_up_collapses(X=X,kX=kX,dimq=1,signal=s1,steps=244,random=True) colX_rand=all_X_rand[-1] colS_rand=all_signals_rand[-1] dmtvis.plot_hodge_decomp(X,s1,kX,phispsis_rand,trange=30,type_collapse='up') plt.savefig('...
_____no_output_____
MIT
total-collapsing.ipynb
stefaniaebli/dmt-signal-processing
Comparing losses
def CI_plot_y(data, conf = .95): from scipy.stats import sem, t n = np.array(data).shape[0] std_err = sem(data,axis = 0) h = std_err * t.ppf((1 + .95) / 2, n - 1) return h typ=['normal','uniform','height','center'] steps=np.arange(244) s=[1,50,100,150,200,240] for j in typ: l=np.load('./data/da...
_____no_output_____
MIT
total-collapsing.ipynb
stefaniaebli/dmt-signal-processing
Object Detection Data Set (Pikachu). There are no small data sets, like MNIST or Fashion-MNIST, in the object detection field. In order to quickly test models, we are going to assemble a small data set. First, we generate 1000 Pikachu images of different angles and sizes using an open source 3D Pikachu model. Then, we c...
%matplotlib inline import d2l from mxnet import gluon, image import os # Save to the d2l package. def download_pikachu(data_dir): root_url = ('https://apache-mxnet.s3-accelerate.amazonaws.com/' 'gluon/dataset/pikachu/') dataset = {'train.rec': 'e6bcb6ffba1ac04ff8a9b1115e650af56ee969c8', ...
_____no_output_____
MIT
d2l-en/chapter_computer-vision/object-detection-dataset.ipynb
mru4913/Dive-into-Deep-Learning
Read the Data Set. We are going to read the object detection data set by creating the instance `ImageDetIter`. The "Det" in the name refers to Detection. We will read the training data set in random order. Since the format of the data set is RecordIO, we need the image index file `'train.idx'` to read random mini-batche...
# Save to the d2l package. def load_data_pikachu(batch_size, edge_size=256): """Load the pikachu dataset""" data_dir = '../data/pikachu' download_pikachu(data_dir) train_iter = image.ImageDetIter( path_imgrec=os.path.join(data_dir, 'train.rec'), path_imgidx=os.path.join(data_dir, 'train....
_____no_output_____
MIT
d2l-en/chapter_computer-vision/object-detection-dataset.ipynb
mru4913/Dive-into-Deep-Learning
Below, we read a mini-batch and print the shape of the image and label. The shape of the image is the same as in the previous experiment (batch size, number of channels, height, width). The shape of the label is (batch size, $m$, 5), where $m$ is equal to the maximum number of bounding boxes contained in a single image...
batch_size, edge_size = 32, 256 train_iter, _ = load_data_pikachu(batch_size, edge_size) batch = train_iter.next() batch.data[0].shape, batch.label[0].shape
_____no_output_____
MIT
d2l-en/chapter_computer-vision/object-detection-dataset.ipynb
mru4913/Dive-into-Deep-Learning
Graphic Data. We have ten images with bounding boxes on them. We can see that the angle, size, and position of Pikachu are different in each image. Of course, this is a simple man-made data set. In actual practice, the data is usually much more complicated.
imgs = (batch.data[0][0:10].transpose((0, 2, 3, 1))) / 255 axes = d2l.show_images(imgs, 2, 5, scale=2) for ax, label in zip(axes, batch.label[0][0:10]): d2l.show_bboxes(ax, [label[0][1:5] * edge_size], colors=['w'])
_____no_output_____
MIT
d2l-en/chapter_computer-vision/object-detection-dataset.ipynb
mru4913/Dive-into-Deep-Learning
The Goal of this Notebook is to predict Future Sales given historical data (daily granularity). This is a part of the kaggle competition "Predict Future Sales": https://www.kaggle.com/c/competitive-data-science-predict-future-sales/data Where more information about the problem, dataset and other solutions can be found....
# mount gdrive from google.colab import drive drive.mount('/gdrive') # cd to dir % cd '../gdrive/My Drive/self_teach/udacity_ml_eng_nanodegree' # Import Libraries import pandas as pd import numpy as np import warnings from sklearn.preprocessing import LabelEncoder # Visualisation Libraries import seaborn as sns import...
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead. import pandas.util.testing as tm
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Before we begin, thanks to the following notebooks, from which I gained some ideas for feature engineering/visualisations (and took code snippets). I would suggest having a look at their notebooks and work as well, and if you like them, give them a thumbs up on Kaggle to support their work :)):- https://www.kaggle.com/dla...
# Load in dataset (cast float64 -> float32 and int32 -> int16 to save memory) items = pd.read_csv('./data/competition_files/items.csv', dtype={'item_name': 'str', 'item_id': 'int16', 'item_category_id': 'int16'} ) shops = pd.read_csv('./data/competition_files/shops.csv', ...
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Join the different data sets; merge onto train df
train = train.join( items, on='item_id', rsuffix='_').join( shops, on='shop_id', rsuffix='_').join( categories, on='item_category_id', rsuffix='_').drop( ['item_id_', 'shop_id_', 'item_category_id_'], axis=1 )
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Probe the train data; it appears that there are no NaN or missing values, which is quite good.
print("----------Top-5- Record----------") print(train.head(5)) print("-----------Information-----------") print(train.info()) print("-----------Data Types-----------") print(train.dtypes) print("----------Missing value-----------") print(train.isnull().sum()) print("----------Null value-----------") print(train.isna()...
Min date from train set: 2013-01-01 Max date from train set: 2015-12-10
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Data is from 1st January 2013 to 10th December 2015, as we expect. So it turns out that a lot of data in the training set for columns "shop_id" and "item_id" does not appear in the test set. This could perhaps be because the item is no longer on sale as time goes on, or shops have closed down or moved addresses. As we wan...
test_shop_ids = test['shop_id'].unique() test_item_ids = test['item_id'].unique() # Only shops that exist in test set. corrlate_train = train[train['shop_id'].isin(test_shop_ids)] # Only items that exist in test set. correlate_train = corrlate_train[corrlate_train['item_id'].isin(test_item_ids)] print('Initial data se...
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
It appears we have 5 duplicated rows; let's look into these.
print('Number of duplicates:', len(train[train.duplicated()]))
Number of duplicates: 5
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
The item IDs are all the same, as is the price for a number of them; other columns such as date and date_block_num look different, so this appears not to be a mistake. As there are only 5 duplicated rows, we will leave these in for now and deal with them later.
train[train.duplicated()]
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Plot the train data; look for outliers. It seems like there are a few with item price > 100000 and with item count per day > 1000. We will remove these from our training set.
plt.figure(figsize=(10,4)) plt.xlim(-100, 3000) sns.boxplot(x=train.item_cnt_day) plt.figure(figsize=(10,4)) plt.xlim(train.item_price.min(), train.item_price.max()*1.1) sns.boxplot(x=train.item_price) train = train[train.item_price<100000] train = train[train.item_cnt_day<1000] plt.figure(figsize=(10,4)) plt.xlim(-10...
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Looking better after having removed the outliers. Fill any item_price < 0 with the median item price.
# Calculate the item price median median = train.item_price.median() print("Item Price Median = {}".format(median)) train.loc[train.item_price<0, 'item_price'] = median # Double check there are no item price rows < 0 train.loc[train.item_price<0, 'item_price']
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Count the number of rows with item_cnt_day < 0; there seem to be too many to be anomalous, and this could be an important feature. We will leave this in our dataset.
len(train.loc[train.item_cnt_day<0, 'item_cnt_day'])
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Some shops are duplicates of each other (according to name), we will fix these in our train and test set.
# Якутск Орджоникидзе, 56 train.loc[train.shop_id == 0, 'shop_id'] = 57 test.loc[test.shop_id == 0, 'shop_id'] = 57 # Якутск ТЦ "Центральный" train.loc[train.shop_id == 1, 'shop_id'] = 58 test.loc[test.shop_id == 1, 'shop_id'] = 58 # Жуковский ул. Чкалова 39м² train.loc[train.shop_id == 10, 'shop_id'] = 11 test.loc[t...
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Process "Shop_name" column -> shop name begins with city name.
# Fix erroneous shop name title train.loc[train.shop_name == 'Сергиев Посад ТЦ "7Я"', 'shop_name'] = 'СергиевПосад ТЦ "7Я"' # Create a column for city train['city'] = train['shop_name'].str.split(' ').map(lambda x: x[0]) train.head() # Fix a city name (typo) train.loc[train.city == '!Якутск', 'city'] = 'Якутск' # Encod...
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Each category name contains a type and subtype in its name. Treat this similarly to how we treated shop name: split into separate columns and encode them into labels (label encoding).
# Create separate column with split category name train['split_category_name'] = train['item_category_name'].str.split('-') train.head() # Make column for category type and encode train['item_category_type'] = train['split_category_name'].map(lambda x : x[0].strip()) train['item_category_type_code'] = LabelEncoder().fi...
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
We can now drop the following columns, having captured and encoded the necessary information from them:- shop_name- item_category_name- split_category_name- item_category_type- item_category_subtype
train = train.drop(['shop_name', 'item_category_name', 'split_category_name', 'item_category_type', 'item_category_subtype', ], axis = 1) train.head()
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Looking at item name, perhaps we can reduce the number of unique values, as there are too many at the moment and our model might struggle with them; we will try to categorise some of these by taking just the first word of an item name and encoding it.
print("Number of unique Item names = {}".format(len(train.item_name.unique()))) # Split item name, extracting first word of the string train['item_name_split'] = train['item_name'].str.split(' ').map(lambda x : x[0].strip()) train.head() print("Number of unique Item First Words = {}".format(len(train['item_name_split']...
Number of unique Item First Words = 1590
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
This seems substantial enough, so we will encode this once again into another column.
train['item_name_code'] = LabelEncoder().fit_transform(train['item_name_split']) train.head()
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
And now we can drop the following columns:- item_name- item_name_split- city (forgot to drop in last round)
train = train.drop(['item_name', 'item_name_split', 'city' ], axis = 1) train.head()
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
So the features above are the ones deemed useful so far and are therefore kept. We will group the dataframe by month (date_block_num), then by the other columns, and aggregate the item price and count, computing the mean and sum per month.
print(len(train)) # Group by month (date_block_num) # Could do more complex, just want something very basic to aggregate train_by_month = train.sort_values('date').groupby([ 'date_block_num', 'item_category_type_cod...
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
As we have to apply predictions to the test set, we must ensure all possible combinations of "shop_id" and "item_id" are covered. To do this, we will loop through all possible combinations in our test set and append them to a new dataframe. Then we will merge that dataframe with our main dataframe and fill in missing...
# Get all unique shop id's and item id's shop_ids = test['shop_id'].unique() item_ids = test['item_id'].unique() # Initialise empty df empty_df = [] # Loop through months and append to dataframe for i in range(34): for item in item_ids: for shop in shop_ids: empty_df.append([i, shop, item]) # Turn...
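As a side note, the nested loops above can be written more compactly with `itertools.product`; a sketch assuming the same `shop_ids`, `item_ids` and 34 monthly date blocks (the column names here are assumed to match the training dataframe):

```python
import itertools
import pandas as pd

# Cartesian product of (month block, shop, item), equivalent to the nested loops above
empty_df = pd.DataFrame(
    list(itertools.product(range(34), shop_ids, item_ids)),
    columns=['date_block_num', 'shop_id', 'item_id'])
```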
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
The fact that we have so many NaNs is quite concerning. Perhaps many more item_id or shop_id values were added in the most recent month (test data) that are not included in the training data. While there may be better ways of dealing with this, we will fill the missing records with 0 and move on.
# Fill missing records with 0 train_by_month.fillna(0, inplace=True) train_by_month.isna().sum() train_by_month.describe()
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
In this first feature-engineering notebook, we have inspected the data, removed outliers and identified features (as well as engineered others) that we would like to use for further feature engineering to train our model. As our feature-engineering steps are quite numerous, we will split them up into separate notebooks, mo...
# Save this as a csv train_by_month.to_csv('./data/output/processed_data_pt1.csv', index=False, header=True)
_____no_output_____
MIT
jupyter_notebooks/feature_engineering_pt1.ipynb
StevenVuong/Udacity-ML-Engineer-Nanodegree-Capstone-Project
Assignment 1.2 - Linear classifier. In this assignment we implement another machine learning model - a linear classifier. For every class, the linear classifier learns a set of weights by which each feature value is multiplied; the products are then summed. The class whose sum is largest is...
import numpy as np import matplotlib.pyplot as plt %matplotlib inline %load_ext autoreload %autoreload 2 from dataset import load_svhn, random_split_train_val from gradient_check_solution import check_gradient from metrics_solution import multiclass_accuracy import linear_classifer_solution as linear_classifer
_____no_output_____
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
As always, the first thing we do is load the data. We will use the same SVHN dataset.
def prepare_for_linear_classifier(train_X, test_X): train_flat = train_X.reshape(train_X.shape[0], -1).astype(np.float) / 255.0 test_flat = test_X.reshape(test_X.shape[0], -1).astype(np.float) / 255.0 # Subtract mean mean_image = np.mean(train_flat, axis = 0) train_flat -= mean_image test_f...
_____no_output_____
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
Playing with gradients! In this course we will write many functions that compute gradients analytically. An essential tool while implementing code that computes gradients is a function that checks it. This function computes the gradient numerically and compares the result against the analytically computed gradient...
# TODO: Implement gradient check function def sqr(x): return x*x, 2*x check_gradient(sqr, np.array([3.0])) def array_sum(x): assert x.shape == (2,), x.shape return np.sum(x), np.ones_like(x) check_gradient(array_sum, np.array([3.0, 2.0])) def array_2d_sum(x): assert x.shape == (2,2) return np.su...
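A minimal sketch of such a numeric check (central differences), loosely following the `check_gradient(f, x)` interface used above, where `f` returns `(value, analytic_grad)`; the course's own implementation may differ in details:

```python
import numpy as np

def check_gradient(f, x, delta=1e-5, tol=1e-4):
    """Compares the analytic gradient returned by f with a numeric estimate."""
    _, analytic_grad = f(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        ix = it.multi_index
        x_plus = x.copy();  x_plus[ix] += delta
        x_minus = x.copy(); x_minus[ix] -= delta
        numeric_grad = (f(x_plus)[0] - f(x_minus)[0]) / (2 * delta)
        if not np.isclose(numeric_grad, analytic_grad[ix], atol=tol):
            print("Gradient check failed at %s: analytic %f, numeric %f"
                  % (str(ix), analytic_grad[ix], numeric_grad))
            return False
        it.iternext()
    print("Gradient check passed!")
    return True
```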
Gradient check passed! Gradient check passed! Gradient check passed!
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
Now let's implement the softmax function, which takes the scores for each class as input and converts them into probabilities from 0 to 1: ![image](https://wikimedia.org/api/rest_v1/media/math/render/svg/e348290cf48ddbb6e9a6ef4e39363568b67c09d3) **Important:** A practical aspect of computing this function is that it involves...
# TODO Implement softmax and cross-entropy for single sample probs = linear_classifer.softmax(np.array([-10, 0, 10])) # Make sure it works for big numbers too! probs = linear_classifer.softmax(np.array([1000, 0, 0])) assert np.isclose(probs[0], 1.0)
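A minimal sketch of a numerically stable softmax for a single 1-D score vector, subtracting the maximum before exponentiation as discussed above; the assignment's `linear_classifer.softmax` may additionally handle batches:

```python
import numpy as np

def softmax(predictions):
    """Turns raw class scores into probabilities without overflowing on large inputs."""
    shifted = predictions - np.max(predictions)  # shifting by a constant does not change the result
    exps = np.exp(shifted)
    return exps / np.sum(exps)
```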
_____no_output_____
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
In addition, we implement the cross-entropy loss, which we will use as the error function. In the general case, cross-entropy is defined as follows: ![image](https://wikimedia.org/api/rest_v1/media/math/render/svg/0cb6da032ab424eefdca0884cd4113fe578f4293) where x ranges over all classes, p(x) is the true probabilit...
probs = linear_classifer.softmax(np.array([-5, 0, 5])) linear_classifer.cross_entropy_loss(probs, 1)
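A sketch of the single-sample cross-entropy, assuming `probs` is the softmax output and `target_index` is the index of the true class:

```python
import numpy as np

def cross_entropy_loss(probs, target_index):
    """Negative log-probability assigned to the true class."""
    return -np.log(probs[target_index])
```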
_____no_output_____
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
Once we have implemented the functions themselves, we can implement the gradient. It turns out that computing the gradient becomes much simpler if we combine these functions into a single one that first computes the probabilities via softmax and then uses them to compute the error via the cross-entropy loss. This function `sof...
# TODO Implement combined function or softmax and cross entropy and produces gradient loss, grad = linear_classifer.softmax_with_cross_entropy(np.array([1, 0, 0]), 1) check_gradient(lambda x: linear_classifer.softmax_with_cross_entropy(x, 1), np.array([1, 0, 0], np.float))
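A sketch of the combined single-sample function, using the standard identity that the gradient of cross-entropy over softmax with respect to the raw scores is `probs - one_hot(target)`; it reuses the `softmax` and `cross_entropy_loss` sketches above:

```python
def softmax_with_cross_entropy(predictions, target_index):
    """Returns (loss, gradient of the loss w.r.t. predictions) for one sample."""
    probs = softmax(predictions)
    loss = cross_entropy_loss(probs, target_index)
    dprediction = probs.copy()
    dprediction[target_index] -= 1.0  # probs minus the one-hot target vector
    return loss, dprediction
```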
Gradient check passed!
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
As the training method we will use stochastic gradient descent (SGD), which works with batches of samples. Therefore, all our functions will receive not a single example but a batch: the input will not be a vector of `num_classes` scores but a matrix of shape `batch_size, num_cl...
# TODO Extend combined function so it can receive a 2d array with batch of samples # Test batch_size = 1 batch_size = 1 predictions = np.zeros((batch_size, 3)) target_index = np.ones(batch_size, np.int) check_gradient(lambda x: linear_classifer.softmax_with_cross_entropy(x, target_index), predictions) # Test batch_si...
Gradient check passed! Gradient check passed!
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
Finally, let's implement the linear classifier itself! softmax and cross-entropy take as input the scores produced by the linear classifier. It produces them very simply: for each class there is a set of weights by which the image pixels are multiplied and then summed. The resulting number is the class score that is fed into softmax...
# TODO Implement linear_softmax function that uses softmax with cross-entropy for linear classifier batch_size = 2 num_classes = 2 num_features = 3 np.random.seed(42) W = np.random.randint(-1, 3, size=(num_features, num_classes)).astype(np.float) X = np.random.randint(-1, 3, size=(batch_size, num_features)).astype(np.f...
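A sketch of the batched `linear_softmax`, assuming a batched version of `softmax_with_cross_entropy` that averages the loss over the batch and returns per-sample score gradients of shape `(batch_size, num_classes)`:

```python
import numpy as np

def linear_softmax(X, W, target_index):
    """Scores = X W; returns the softmax/cross-entropy loss and its gradient w.r.t. W."""
    predictions = np.dot(X, W)                       # (batch_size, num_classes)
    loss, dpredictions = softmax_with_cross_entropy(predictions, target_index)
    dW = np.dot(X.T, dpredictions)                   # chain rule through the linear layer
    return loss, dW
```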
Gradient check passed!
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
And now, regularization. We will use L2 regularization on the weights as part of the overall loss function. Recall that L2 regularization is defined as l2_reg_loss = regularization_strength * sum_ij W[i, j]^2. Implement a function that computes it and the corresponding gradients.
# TODO Implement l2_regularization function that implements loss for L2 regularization linear_classifer.l2_regularization(W, 0.01) check_gradient(lambda w: linear_classifer.l2_regularization(w, 0.01), W)
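A sketch of `l2_regularization` following the formula above: the penalty is `reg_strength * sum(W**2)` and its gradient is `2 * reg_strength * W`:

```python
import numpy as np

def l2_regularization(W, reg_strength):
    """L2 penalty on the weights and its gradient."""
    loss = reg_strength * np.sum(W * W)
    grad = 2.0 * reg_strength * W
    return loss, grad
```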
Gradient check passed!
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
Training! The gradients are in order, so let's implement the training procedure!
# TODO: Implement LinearSoftmaxClassifier.fit function classifier = linear_classifer.LinearSoftmaxClassifier() loss_history = classifier.fit(train_X, train_y, epochs=30, learning_rate=1e-3, batch_size=300, reg=1e1) # let's look at the loss history! plt.plot(loss_history); # Let's check how it performs on validation set...
Accuracy: 0.145 Epoch 0, loss: 2.301971 Epoch 1, loss: 2.301977 Epoch 2, loss: 2.301983 Epoch 3, loss: 2.301990 Epoch 4, loss: 2.301970 Epoch 5, loss: 2.301979 Epoch 6, loss: 2.301968 Epoch 7, loss: 2.301989 Epoch 8, loss: 2.301976 Epoch 9, loss: 2.301980 Epoch 10, loss: 2.301986 Epoch 11, loss: 2.301982 Epoch 12, los...
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
As before, we use cross-validation to tune the hyperparameters. This time, so that training takes a reasonable amount of time, we will use only a single split into training and validation data. Now we need to tune not one but two hyperparameters! Don't limit yourself to the initial...
import itertools num_epochs = 200 batch_size = 300 learning_rates = [1e-3, 1e-4, 1e-5] reg_strengths = [1e-4, 1e-5, 1e-6] best_classifier = None best_val_accuracy = -float("inf") # TODO use validation set to find the best hyperparameters # hint: for best results, you might need to try more values for learning rate a...
best validation accuracy achieved: 0.215000
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
So what accuracy did we achieve on the test data?
test_pred = best_classifier.predict(test_X) test_accuracy = multiclass_accuracy(test_pred, test_y) print('Linear softmax classifier test set accuracy: %f' % (test_accuracy, ))
_____no_output_____
MIT
assignments/assignment1/Linear classifier_solution.ipynb
tbb/dlcourse_ai
Is the SED Correct? In the circle test, the SFH is totally bonkers. We just cannot get the correct SFH back out with MCMC. Is the MCMC getting a good fit?
import numpy as np import matplotlib.pyplot as plt wavelengths = [3551, 4686, 6166, 7480, 8932] # for u, g, r, i, z filters filters = ['u', 'g', 'r', 'i', 'z']
_____no_output_____
MIT
figures/Check-SED.ipynb
benjaminrose/SNIa-Local-Environments
Input Text

So from:

logzsol | dust2 | $\tau$ | tStart | sfTrans | sfSlope
--------|-------|--------|--------|---------|--------
0.5 | 0.1 | 0.5 | 1.5 | 9.0 | -1.0

we get

u | g | r | i | z
--|--|--|--|--
45.36 | 43.76 | 42.99 | 42.67 | 42.39

This SED gets 25 magnitudes subtracted from it (the `c` parameter in the fit) to bring it to a reasonable magnitude. FSPS only cal...
input_sed = np.array([45.36, 43.76, 42.99, 42.67, 42.39]) input_c = -25 fit1_sed = np.array([43.31, 42.06, 41.76, 41.67, 41.62]) fit1_c = -23.48 fit2_sed = np.array([42.28, 41.43, 41.23, 41.01, 40.99]) fit2_c = -22.85 fit3_sed = np.array([41.53, 40.70, 40.55, 40.33, 40.30]) fit3_c = -22.1 plt.figure('fit test') fig, ax...
_____no_output_____
MIT
figures/Check-SED.ipynb
benjaminrose/SNIa-Local-Environments
Check Newer Results. On 2017-08-24 I re-ran the whole analysis method and it got a closer answer on the circle test (particularly with the log(Z_sol)), but it was not perfect. Here I want to compare the SED output results.
fit0824_sed = np.array([42.29, 41.43, 41.21, 40.98, 40.93]) fit0824_c = -25.70 plt.figure('newer fit test') fig, ax = plt.subplots(1,1) ax.plot(wavelengths, input_sed+input_c, label='Input Values') # ax.plot(wavelengths, [20.36, 18.76, 17.99, 17.67, 17.39]) # the in text file numbers. ax.plot(wavelengths, fit1_sed+f...
_____no_output_____
MIT
figures/Check-SED.ipynb
benjaminrose/SNIa-Local-Environments
**Summary Of Findings**: It was found that wildfire frequency across the United States has been increasing in the past decade. Although fire and fire damage were generally localized to mostly the west coast in the past, fire frequency has been gradually increasing in states east of it in the continental US; in 2021, mid...
!apt-get install openjdk-8-jdk-headless -qq > /dev/null !wget https://dlcdn.apache.org/spark/spark-3.2.0/spark-3.2.0-bin-hadoop3.2.tgz
--2021-12-14 04:34:01-- https://dlcdn.apache.org/spark/spark-3.2.0/spark-3.2.0-bin-hadoop3.2.tgz Resolving dlcdn.apache.org (dlcdn.apache.org)... 151.101.2.132, 2a04:4e42::644 Connecting to dlcdn.apache.org (dlcdn.apache.org)|151.101.2.132|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 30096...
MIT
006_bd_proj_dennis.ipynb
sh5864/bigdata-proj
!tar xvzf spark-3.2.0-bin-hadoop3.2.tgz !ls /content/spark-3.2.0-bin-hadoop3.2 # Set the ‘environment’ path import os #os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["SPARK_HOME"] = "/content/spark-3.2.0-bin-hadoop3.2" !pip install -q findspark import findspark findspark.init() from pyspark.s...
_____no_output_____
MIT
006_bd_proj_dennis.ipynb
sh5864/bigdata-proj
Fire Frequency By Year
#Fires not considered "wildfire" are first filtered out #locTime will be used to focus on the frequency of wildfires per state #POOState - Location of wildfire at time of discovery #FireDiscoveryDateTime - Date when the fire was discovered. locatData = locatData.filter(locatData["IncidentTypeCategory"] == "WF") locTim...
_____no_output_____
MIT
006_bd_proj_dennis.ipynb
sh5864/bigdata-proj
A significant difference in wildfire frequency was found between 2013 and 2014; it is assumed the years before 2014 had incomplete data.
locTime.groupBy("Year").count().orderBy("year").show()
+----+-----+ |Year|count| +----+-----+ |2003| 1| |2004| 1| |2008| 1| |2009| 1| |2010| 2| |2014|12634| |2015|19633| |2016|19798| |2017|25114| |2018|22627| |2019|25451| |2020|33348| |2021|34488| +----+-----+
MIT
006_bd_proj_dennis.ipynb
sh5864/bigdata-proj
Number of Fires across the US per state
#To gain insights into the US results, areas outside the US are filtered out. locTime = locTime.filter(locTime["Year"] > 2013) locTime = locTime.filter(locTime["State Occurred"].contains("US")) locTime = locTime.withColumn("State Occurred",substring(locTime["State Occurred"],4,6)) locTime.show() totalFiresPerState = lo...
_____no_output_____
MIT
006_bd_proj_dennis.ipynb
sh5864/bigdata-proj
Findings: From the figure above, it can be seen that fires in the last decade have mostly occurred in the western portion of the United States, and have been most prevalent on the west coast as well as in Montana and Arizona. Number of Fires Per Year Per State
firePerState = locTime.filter(locTime["year"].isNotNull())\ .groupBy("year",'State Occurred').count().orderBy("Year") firePerState.show() import plotly.express as px import pandas as pd fig = px.choropleth(firePerState.toPandas(), locations='State Occurred',locationmode = "USA-states",color = "count",range_color = [0,...
_____no_output_____
MIT
006_bd_proj_dennis.ipynb
sh5864/bigdata-proj
**Findings**: From the above figure, we see a general rise in wildfire occurrences over the years. The west coast has consistently had the highest number of fires. Originally the majority of fires originated on the west coast, but states east of it have steadily seen increasing occurrences. In 2...
#Primarily tracks historical fire Perimeters from 2000-2018 oldPerimData = spark.read.option("header",True) \ .option("inferSchema", True) \ .csv("Historic_GeoMAC_Perimeters_Combined_2000-2018.csv") #Meaningful data is cleaned and selected oldPerimTime = oldPerimData.select((oldPerimData["state"]).alias("State Occurred...
_____no_output_____
MIT
006_bd_proj_dennis.ipynb
sh5864/bigdata-proj
**Findings**: In the above figure, it is found that the total area damaged by wildfires has been inconsistent throughout the past two decades; while fires are increasing in frequency, the area affected does not necessarily increase. Total Area Burned Per State
damagePerState = oldPerimTime.union(recentTime) damagePerStateOverall= damagePerState.groupBy("State Occurred").agg(sum('area(acres)').alias("total area burned (acres)")) import plotly.express as px import pandas as pd fig = px.choropleth(damagePerStateOverall.toPandas(), locations='State Occurred',locationmode = "USA-...
_____no_output_____
MIT
006_bd_proj_dennis.ipynb
sh5864/bigdata-proj
**Findings**: The above map shows that the most significant damage was found on the west coast; this is consistent with and supports the findings from the occurrences map. Some states that had a high number of fire occurrences, such as Texas, have not seen proportional quantities of acres burned. In contrast to its low number ...
damagePerStateYearly= damagePerState.groupBy("year","State Occurred").agg(sum('area(acres)').alias("total area burned (acres)")).orderBy("year") import plotly.express as px import pandas as pd fig = px.choropleth(damagePerStateYearly.toPandas(), locations='State Occurred',locationmode = "USA-states",color = "total area...
_____no_output_____
MIT
006_bd_proj_dennis.ipynb
sh5864/bigdata-proj
Write a pandas dataframe to disk as a gzip-compressed csv
- df.to_csv('dfsavename.csv.gz', compression='gzip')

Read from disk
- df = pd.read_csv('dfsavename.csv.gz', compression='gzip')

Useful magics
- %%timeit for the whole cell
- %timeit for the specific line
- %%latex to render the cell as a block of latex
- %prun and %%p...
DATASET_PATH = '/media/rs/0E06CD1706CD0127/Kapok/WSDM/' HDF_FILENAME = DATASET_PATH + 'music_info.h5' HDF_TRAIN_FEATURE_FILENAME = DATASET_PATH + 'music_train_feature_part.h5' HDF_TEST_FEATURE_FILENAME = DATASET_PATH + 'music_test_feature_part.h5' def set_logging(logger_name, logger_file_name): log = logging.getLog...
_____no_output_____
MIT
MusicRecommendation/.ipynb_checkpoints/TestHDFTables-checkpoint.ipynb
HiKapok/KaggleCompetitions
The lidar system, data (1 of 2 datasets)
========================================

Generate a chart of the data recorded by the lidar system
import numpy as np import matplotlib.pyplot as plt waveform_1 = np.load('waveform_1.npy') t = np.arange(len(waveform_1)) fig, ax = plt.subplots(figsize=(8, 6)) plt.plot(t, waveform_1) plt.xlabel('Time [ns]') plt.ylabel('Amplitude [bins]') plt.show()
_____no_output_____
CC-BY-4.0
_downloads/plot_optimize_lidar_data.ipynb
scipy-lectures/scipy-lectures.github.com
Copyright 2020 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under...
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
Creating Keras Models with TFL Layers. Overview: You can use TFL Keras layers to construct Keras models with monotonicity and other shape constraints. This example builds and trains a calibrated lattice model f...
#@test {"skip": true} !pip install tensorflow-lattice pydot
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
Importing required packages:
import tensorflow as tf import logging import numpy as np import pandas as pd import sys import tensorflow_lattice as tfl from tensorflow import feature_column as fc logging.disable(sys.maxsize)
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
Downloading the UCI Statlog (Heart) dataset:
# UCI Statlog (Heart) dataset. csv_file = tf.keras.utils.get_file( 'heart.csv', 'http://storage.googleapis.com/download.tensorflow.org/data/heart.csv') training_data_df = pd.read_csv(csv_file).sample( frac=1.0, random_state=41).reset_index(drop=True) training_data_df.head()
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
Setting the default values used for training in this guide:
LEARNING_RATE = 0.1 BATCH_SIZE = 128 NUM_EPOCHS = 100
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
Sequential Keras Model. This example creates a Sequential Keras model and only uses TFL layers. Lattice layers expect `input[i]` to be within `[0, lattice_sizes[i] - 1.0]`, so we need to define the lattice sizes ahead of the calibration layers so we can properly specify the output range of the calibration layers.
# Lattice layer expects input[i] to be within [0, lattice_sizes[i] - 1.0], so lattice_sizes = [3, 2, 2, 2, 2, 2, 2]
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
We use a `tfl.layers.ParallelCombination` layer to group together calibration layers which have to be executed in parallel in order to be able to create a Sequential model.
combined_calibrators = tfl.layers.ParallelCombination()
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
We create a calibration layer for each feature and add it to the parallel combination layer. For numeric features we use `tfl.layers.PWLCalibration`, and for categorical features we use `tfl.layers.CategoricalCalibration`.
# ############### age ############### calibrator = tfl.layers.PWLCalibration( # Every PWLCalibration layer must have keypoints of piecewise linear # function specified. Easiest way to specify them is to uniformly cover # entire input range by using numpy.linspace(). input_keypoints=np.linspace( ...
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
We then create a lattice layer to nonlinearly fuse the outputs of the calibrators. Note that we need to specify the monotonicity of the lattice to be increasing for required dimensions. The composition with the direction of the monotonicity in the calibration will result in the correct end-to-end direction of monotonici...
lattice = tfl.layers.Lattice( lattice_sizes=lattice_sizes, monotonicities=[ 'increasing', 'none', 'increasing', 'increasing', 'increasing', 'increasing', 'increasing' ], output_min=0.0, output_max=1.0)
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
We can then create a sequential model using the combined calibrators and lattice layers.
model = tf.keras.models.Sequential() model.add(combined_calibrators) model.add(lattice)
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
Training works the same as any other keras model.
features = training_data_df[[ 'age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg' ]].values.astype(np.float32) target = training_data_df[['target']].values.astype(np.float32) model.compile( loss=tf.keras.losses.mean_squared_error, optimizer=tf.keras.optimizers.Adagrad(learning_rate=LEARNING_RATE)) mod...
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
Functional Keras Model. This example uses the functional API for Keras model construction. As mentioned in the previous section, lattice layers expect `input[i]` to be within `[0, lattice_sizes[i] - 1.0]`, so we need to define the lattice sizes ahead of the calibration layers so we can properly specify the output range of the ...
# We are going to have 2-d embedding as one of lattice inputs. lattice_sizes = [3, 2, 2, 3, 3, 2, 2]
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
For each feature, we need to create an input layer followed by a calibration layer. For numeric features we use `tfl.layers.PWLCalibration` and for categorical features we use `tfl.layers.CategoricalCalibration`.
model_inputs = [] lattice_inputs = [] # ############### age ############### age_input = tf.keras.layers.Input(shape=[1], name='age') model_inputs.append(age_input) age_calibrator = tfl.layers.PWLCalibration( # Every PWLCalibration layer must have keypoints of piecewise linear # function specified. Easiest way t...
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
We then create a lattice layer to nonlinearly fuse the outputs of the calibrators. Note that we need to specify the monotonicity of the lattice to be increasing for required dimensions. The composition with the direction of the monotonicity in the calibration will result in the correct end-to-end direction of monotonici...
lattice = tfl.layers.Lattice( lattice_sizes=lattice_sizes, monotonicities=[ 'increasing', 'none', 'increasing', 'increasing', 'increasing', 'increasing', 'increasing' ], output_min=0.0, output_max=1.0, name='lattice', )( lattice_inputs)
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
To add more flexibility to the model, we add an output calibration layer.
model_output = tfl.layers.PWLCalibration( input_keypoints=np.linspace(0.0, 1.0, 5), name='output_calib', )( lattice)
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
We can now create a model using the inputs and outputs.
model = tf.keras.models.Model( inputs=model_inputs, outputs=model_output) tf.keras.utils.plot_model(model, rankdir='LR')
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
Training works the same as any other keras model. Note that, with our setup, input features are passed as separate tensors.
feature_names = ['age', 'sex', 'cp', 'trestbps', 'chol', 'fbs', 'restecg'] features = np.split( training_data_df[feature_names].values.astype(np.float32), indices_or_sections=len(feature_names), axis=1) target = training_data_df[['target']].values.astype(np.float32) model.compile( loss=tf.keras.losses....
_____no_output_____
Apache-2.0
docs/tutorials/keras_layers.ipynb
sarvex/lattice-1
Common Regression class
class Regression: def __init__(self, learning_rate, iteration, regularization): """ :param learning_rate: A samll value needed for gradient decent, default value id 0.1. :param iteration: Number of training iteration, default value is 10,000. """ self.m = None self.n ...
_____no_output_____
MIT
MachineLearning/supervised_machine_learning/Polinamial_and_PlynomialRidge_Regression.ipynb
pavi-ninjaac/Machine_Learing_sratch
Data Creation
# Define the training data. X, y = make_regression(n_samples=50000, n_features=8) # Change the shape of the target to a 1-dimensional array. y = y[:, np.newaxis] print("="*100) print("Number of training data samples-----> {}".format(X.shape[0])) print("Number of training features --------> {}".format(X.shape[1])) print(...
_____no_output_____
MIT
MachineLearning/supervised_machine_learning/Polinamial_and_PlynomialRidge_Regression.ipynb
pavi-ninjaac/Machine_Learing_sratch
Polynomial Regression from Scratch
def PolynomialFeature(X, degree): """ It is type of feature engineering ---> adding some more features based on the exisiting features by squaring or cubing. :param X: data need to be converted. :param degree: int- The degree of the polynomial that the features X will be transformed to. """ ...
==================================================================================================== The Cost function for the iteration 10----->2524.546198902789 :) The Cost function for the iteration 20----->313.8199639696676 :) The Cost function for the iteration 30----->39.17839267886082 :) The Cost function for th...
MIT
MachineLearning/supervised_machine_learning/Polinamial_and_PlynomialRidge_Regression.ipynb
pavi-ninjaac/Machine_Learing_sratch
Polynomial Regression using scikit-learn for comparison
from sklearn.preprocessing import PolynomialFeatures from sklearn.linear_model import LinearRegression from sklearn.metrics import r2_score # data is already defined, going to use the same data for comparision. print("="*100) print("Number of training data samples-----> {}".format(X.shape[0])) print("Number of trainin...
==================================================================================================== R2 score of the model is 1.0
MIT
MachineLearning/supervised_machine_learning/Polinamial_and_PlynomialRidge_Regression.ipynb
pavi-ninjaac/Machine_Learing_sratch
Polynomial Ridge Regression from scratch
class PolynamialRidgeRegression(Regression): """ Polynomail Ridge Regression is basically polynomial regression with l2 regularization. """ def __init__(self, learning_rate, iteration, degree, lamda): """ :param learning_rate: [range from 0 to infinity] the stpe distance used while doing...
==================================================================================================== The Cost function for the iteration 10----->4178.872832133191 :) The Cost function for the iteration 20----->2887.989505020741 :) The Cost function for the iteration 30----->2785.6247039737964 :) The Cost function for t...
MIT
MachineLearning/supervised_machine_learning/Polinamial_and_PlynomialRidge_Regression.ipynb
pavi-ninjaac/Machine_Learing_sratch
Lists from: [HackerRank](https://www.hackerrank.com/challenges/python-lists/problem) - (easy). Consider a list (list = []). You can perform the following commands: insert `i`, `e`: Insert integer `e` at position `i`. print(): Print the list. remove `e`: Delete the first occurrence of integer `e`. append `e`: Insert integer `e` at t...
N = int(input()) ls = [] for i in range(N): n = input() a = n.split() cmd = a[0] if cmd == "insert": ls.insert(int(a[1]), int(a[2])) elif cmd == "remove": ls.remove(int(a[1])) elif cmd == "append": ls.append(int(a[1])) elif cmd == "sort": ls.sort() elif cm...
12 insert 0 5 insert 1 10 insert 0 6 print [6, 5, 10] remove 6 append 9 append 1 sort print [1, 5, 9, 10] pop reverse print [9, 5, 1]
MIT
9_coding quizzes/05_list_HackerRank.ipynb
lucaseo/TIL
Copyright 2018 The AdaNet Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under...
_____no_output_____
MIT
frameworks/tensorflow/adanet_objective.ipynb
jiankaiwang/sophia.ml
The AdaNet objective. One of the key contributions from *AdaNet: Adaptive Structural Learning of Neural Networks* [[Cortes et al., ICML 2017](https://arxiv.org/abs/1607.01097)] is defining an algorithm that aims to directly minimize the DeepBoost generalization bound fr...
from __future__ import absolute_import from __future__ import division from __future__ import print_function import functools import adanet import tensorflow as tf # The random seed to use. RANDOM_SEED = 42
_____no_output_____
MIT
frameworks/tensorflow/adanet_objective.ipynb
jiankaiwang/sophia.ml
Boston Housing dataset. In this example, we will solve a regression task known as the [Boston Housing dataset](https://www.cs.toronto.edu/~delve/data/boston/bostonDetail.html) to predict the price of suburban houses in Boston, MA in the 1970s. There are 13 numerical features, the labels are in thousands of dollars, and ...
(x_train, y_train), (x_test, y_test) = ( tf.keras.datasets.boston_housing.load_data()) print(x_test.shape) print(x_test[0]) print(y_test.shape) print(y_test[0])
(102, 13) [ 18.0846 0. 18.1 0. 0.679 6.434 100. 1.8347 24. 666. 20.2 27.25 29.05 ] (102,) 7.2
MIT
frameworks/tensorflow/adanet_objective.ipynb
jiankaiwang/sophia.ml
Supply the data in TensorFlow. Our first task is to supply the data in TensorFlow. Using the tf.estimator.Estimator convention, we will define a function that returns an input_fn which returns feature and label Tensors. We will also use the tf.data.Dataset API to feed the data into our models. Also, as a preprocessing step,...
FEATURES_KEY = "x" def input_fn(partition, training, batch_size): """Generate an input function for the Estimator.""" def _input_fn(): if partition == "train": dataset = tf.data.Dataset.from_tensor_slices(({ FEATURES_KEY: tf.log1p(x_train) }, tf.log1p(y_train))) else: dataset...
_____no_output_____
MIT
frameworks/tensorflow/adanet_objective.ipynb
jiankaiwang/sophia.ml
Define the subnetwork generator. Let's define a subnetwork generator similar to the one in [[Cortes et al., ICML 2017](https://arxiv.org/abs/1607.01097)] and in `simple_dnn.py` which creates two candidate fully-connected neural networks at each iteration with the same width, but one with an additional hidden layer. To make our g...
_NUM_LAYERS_KEY = "num_layers" class _SimpleDNNBuilder(adanet.subnetwork.Builder): """Builds a DNN subnetwork for AdaNet.""" def __init__(self, optimizer, layer_size, num_layers, learn_mixture_weights, seed): """Initializes a `_DNNBuilder`. Args: optimizer: An `Optimizer` instance f...
_____no_output_____
MIT
frameworks/tensorflow/adanet_objective.ipynb
jiankaiwang/sophia.ml
Train and evaluate. Next we create an `adanet.Estimator` using the `SimpleDNNGenerator` we just defined. In this section we will show the effects of two hyperparameters: **learning mixture weights** and **complexity regularization**. On the righthand side you will be able to play with the hyperparameters of this model. Unt...
#@title AdaNet parameters LEARNING_RATE = 0.001 #@param {type:"number"} TRAIN_STEPS = 100000 #@param {type:"integer"} BATCH_SIZE = 32 #@param {type:"integer"} LEARN_MIXTURE_WEIGHTS = False #@param {type:"boolean"} ADANET_LAMBDA = 0 #@param {type:"number"} BOOSTING_ITERATIONS = 5 #@param {type:"integer"} def tr...
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmplcezpthw INFO:tensorflow:Using config: {'_save_checkpoints_secs': None, '_experimental_distribute': None, '_service': None, '_task_id': 0, '_is_chief': True, '_master': '', '_evaluation_master': '', '_train_distribute': None, '_model_dir': '/tmp/tmpl...
MIT
frameworks/tensorflow/adanet_objective.ipynb
jiankaiwang/sophia.ml
These hyperparameters produce a model that achieves **0.0348** MSE on the test set. Notice that the ensemble is composed of 5 subnetworks, each one a hidden layer deeper than the previous. The most complex subnetwork is made of 5 hidden layers. Since `SimpleDNNGenerator` produces subnetworks of varying complexity, and our m...
#@test {"skip": true} results, _ = train_and_evaluate(learn_mixture_weights=True) print("Loss:", results["average_loss"]) print("Uniform average loss:", results["average_loss/adanet/uniform_average_ensemble"]) print("Architecture:", ensemble_architecture(results))
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpsbdccn23 INFO:tensorflow:Using config: {'_save_checkpoints_secs': None, '_experimental_distribute': None, '_service': None, '_task_id': 0, '_is_chief': True, '_master': '', '_evaluation_master': '', '_train_distribute': None, '_model_dir': '/tmp/tmps...
MIT
frameworks/tensorflow/adanet_objective.ipynb
jiankaiwang/sophia.ml
Learning the mixture weights produces a model with **0.0449** MSE, a bit worse than the uniform average model, which the `adanet.Estimator` always computes as a baseline. The mixture weights were learned without regularization, so they likely overfit to the training set. Observe that AdaNet learned the same ensemble composi...
#@test {"skip": true} results, _ = train_and_evaluate(learn_mixture_weights=True, adanet_lambda=.015) print("Loss:", results["average_loss"]) print("Uniform average loss:", results["average_loss/adanet/uniform_average_ensemble"]) print("Architecture:", ensemble_architecture(results))
WARNING:tensorflow:Using temporary folder as model directory: /tmp/tmpyxwongpm INFO:tensorflow:Using config: {'_save_checkpoints_secs': None, '_experimental_distribute': None, '_service': None, '_task_id': 0, '_is_chief': True, '_master': '', '_evaluation_master': '', '_train_distribute': None, '_model_dir': '/tmp/tmpy...
MIT
frameworks/tensorflow/adanet_objective.ipynb
jiankaiwang/sophia.ml
load test data and loc_map
test = np.load("checkpt/test_data.npy") loc_map = np.load("checkpt/test_loc_map.npy") test_label = np.loadtxt("checkpt/test_label.txt") test_label.shape
_____no_output_____
CC-BY-4.0
CNN/Heatmap_demo.ipynb
ucl-exoplanets/DI-Project
read checkpoint
model = load_model("checkpt/ckt/checkpt_0.h5") model.summary() pred = model.predict(test)
_____no_output_____
CC-BY-4.0
CNN/Heatmap_demo.ipynb
ucl-exoplanets/DI-Project
get True Positive
## .argmax(axis =1 ) will return the biggest value of the two as 1, and the other as 0. i.e. [0.6 ,0.9] will give [0,1] ## this is a good format as our test_label is organised in [0,1] or [1,0] format. TP = np.where(pred.argmax(axis=1) == test_label.argmax(axis=1)) ## I will suggest to access the confidence of the pred...
_____no_output_____
CC-BY-4.0
CNN/Heatmap_demo.ipynb
ucl-exoplanets/DI-Project
Calculate and plot heatmap
num = -1 heatmap = return_heatmap(model, test[num]) plot_heatmap(heatmap, loc_map[num])
_____no_output_____
CC-BY-4.0
CNN/Heatmap_demo.ipynb
ucl-exoplanets/DI-Project
Constructing the dataset
def create_data(): datasets = [['青年', '否', '否', '一般', '否'], ['青年', '否', '否', '好', '否'], ['青年', '是', '否', '好', '是'], ['青年', '是', '是', '一般', '是'], ['青年', '否', '否', '一般', '否'], ['中年', '否', '否', '一般', '否'], ['中年', '否', '否', '好', '...
_____no_output_____
MIT
DecisionTree/MyDecisionTree.ipynb
QYHcrossover/ML-numpy