You can see that each loan now has a new key `emi`, which provides the EMI for the loan. We can extract this logic into a function so that we can use it for other files too.
def compute_emis(loans):
    for loan in loans:
        loan['emi'] = loan_emi(
            loan['amount'],
            loan['duration'],
            loan['rate']/12, # the CSV contains yearly rates
            loan['down_payment'])
_____no_output_____
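The helper `loan_emi` is called above but not shown in this chunk. A minimal sketch, assuming the standard EMI formula on the financed amount and rounding up to a whole currency unit (consistent with the integer EMIs in the outputs later in this notebook), might look like this — the signature is inferred from the call site:

```python
import math

def loan_emi(amount, duration, rate, down_payment=0):
    """Equated monthly installment (hypothetical reconstruction).

    amount: total cost, duration: number of months,
    rate: monthly interest rate, down_payment: amount paid upfront.
    """
    loan_amount = amount - down_payment
    if rate == 0:
        emi = loan_amount / duration
    else:
        # Standard amortization formula: P*r*(1+r)^n / ((1+r)^n - 1)
        emi = loan_amount * rate * (1 + rate)**duration / ((1 + rate)**duration - 1)
    return math.ceil(emi)  # round up to the nearest whole currency unit
```

For example, `loan_emi(100000, 12, 0.1/12, 0)` gives 8792, matching one of the rows written later.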
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
Writing to files
Now that we have performed some processing on the data, it would be good to write the results back to a CSV file. We can create/open a file in `w` mode using `open` and write to it using the `.write` method. The string `format` method will come in handy here.
loans2 = read_csv('./data/loans2.txt')
compute_emis(loans2)
loans2

with open('./results/emis2.txt', 'w') as f:
    for loan in loans2:
        f.write('{},{},{},{},{}\n'.format(
            loan['amount'],
            loan['duration'],
            loan['rate'],
            loan['down_payment'],
            loan['em...
_____no_output_____
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
Let's verify that the file was created and written to as expected.
os.listdir('results')

with open('./results/emis2.txt', 'r') as f:
    print(f.read())
828400.0,120.0,0.11,100000.0,10034
4633400.0,240.0,0.06,0.0,33196
42900.0,90.0,0.08,8900.0,504
983000.0,16.0,0.14,0.0,67707
15230.0,48.0,0.07,4300.0,262
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
Great, looks like the loan details (along with the computed EMIs) were written into the file. Let's define a generic function `write_csv` which takes a list of dictionaries and writes it to a file in CSV format. We will also include the column headers in the first line.
def write_csv(items, path):
    # Open the file in write mode
    with open(path, 'w') as f:
        # Return if there's nothing to write
        if len(items) == 0:
            return
        # Write the headers in the first line
        headers = list(items[0].keys())
        f.write(','.join(headers) + '\n'...
_____no_output_____
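The cell above is truncated. A complete version of the same idea — a hypothetical sketch consistent with the visible part, writing one comma-separated line per dictionary in header order — could be:

```python
def write_csv(items, path):
    # Open the file in write mode
    with open(path, 'w') as f:
        # Return if there's nothing to write
        if len(items) == 0:
            return
        # Write the headers in the first line
        headers = list(items[0].keys())
        f.write(','.join(headers) + '\n')
        # Write one line per dictionary, using the header order
        for item in items:
            values = [str(item.get(header, '')) for header in headers]
            f.write(','.join(values) + '\n')
```

Note that this naive join breaks if a value itself contains a comma — which is exactly the problem the movies example below runs into.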
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
Do you understand how the function works? If not, try executing each statement line by line or in a separate cell to figure out how it works. Let's try it out!
loans3 = read_csv('./data/loans3.txt')
compute_emis(loans3)
write_csv(loans3, './results/emis3.txt')

with open('./results/emis3.txt', 'r') as f:
    print(f.read())
amount,duration,rate,down_payment,emi
45230.0,48.0,0.07,4300.0,981
883000.0,16.0,0.14,0.0,60819
100000.0,12.0,0.1,0.0,8792
728400.0,120.0,0.12,100000.0,9016
3637400.0,240.0,0.06,0.0,26060
82900.0,90.0,0.07,8900.0,1060
316000.0,16.0,0.13,0.0,21618
15230.0,48.0,0.08,4300.0,267
991360.0,99.0,0.08,0.0,13712
323000.0,27.0,0...
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
With just four lines of code, we can now read each downloaded file, calculate the EMIs, and write the results back to new files:
for i in range(1,4):
    loans = read_csv('./data/loans{}.txt'.format(i))
    compute_emis(loans)
    write_csv(loans, './results/emis{}.txt'.format(i))

os.listdir('./data')
os.listdir('./results/')
_____no_output_____
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
Isn't that wonderful? Once all the functions are defined, we can calculate EMIs for thousands or even millions of loans across many files in seconds with just a few lines of code. Now we're starting to see the real power of using a programming language like Python for processing data!

Using Pandas to Read and Write CS...
movies_url = 'https://gist.githubusercontent.com/aakashns/afee0a407d44bbc02321993548021af9/raw/6d7473f0ac4c54aca65fc4b06ed831b8a4840190/movies.csv'
urlretrieve(movies_url, 'data/movies.csv')
movies = read_csv('data/movies.csv')
movies
_____no_output_____
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
As you can see above, the movie descriptions weren't parsed properly. To read this CSV correctly, we can use the `pandas` library.
!pip install pandas --upgrade --quiet
import pandas as pd
_____no_output_____
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
The `pd.read_csv` function can be used to read the CSV file into a pandas data frame: a spreadsheet-like object for analyzing and processing data. We'll learn more about data frames in a future lesson.
movies_dataframe = pd.read_csv('data/movies.csv')
movies_dataframe
_____no_output_____
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
A dataframe can be converted into a list of dictionaries using the `to_dict` method.
movies = movies_dataframe.to_dict('records')
movies
_____no_output_____
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
If you don't pass the argument `'records'`, you get a dictionary of lists instead.
movies_dict = movies_dataframe.to_dict()
movies_dict
_____no_output_____
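To see the difference between the two orientations side by side, a small self-contained example (assuming `pandas` is installed; the toy column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({'title': ['A', 'B'], 'year': [1999, 2004]})

# 'records': one dictionary per row
records = df.to_dict('records')
# → [{'title': 'A', 'year': 1999}, {'title': 'B', 'year': 2004}]

# default: one dictionary per column, keyed by row index
columns = df.to_dict()
# → {'title': {0: 'A', 1: 'B'}, 'year': {0: 1999, 1: 2004}}
```

The `'records'` form is the one that matches the list-of-dictionaries shape used by `write_csv` earlier.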
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
Let's try using the `write_csv` function to write the data in `movies` back to a CSV file.
write_csv(movies, 'results/movies2.csv')

# only for linux or mac
!head results/movies2.csv
'head' is not recognized as an internal or external command, operable program or batch file.
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
As you can see above, the CSV file is not formatted properly. This can be verified by attempting to read the file using `pd.read_csv`.
pd.read_csv('results/movies2.csv')
_____no_output_____
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
To convert a list of dictionaries into a dataframe, you can use the `pd.DataFrame` constructor.
df2 = pd.DataFrame(movies)
df2
_____no_output_____
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
It can now be written to a CSV file using the `.to_csv` method of a dataframe.
df2.to_csv('results/movies3.csv', index=None)
_____no_output_____
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
Can you guess what the argument `index=None` does? Try removing it and observing the difference in output.
# only for linux or mac
!head results/movies3.csv
'head' is not recognized as an internal or external command, operable program or batch file.
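To answer the `index=None` question without relying on `head` (which isn't available on Windows, as the output above shows): `to_csv` returns a string when no path is given, so the difference is easy to inspect directly. A small sketch, assuming `pandas` is installed:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2]})

# With the default, the row index becomes an extra, unnamed first column
with_index = df.to_csv()
# With index=None (or index=False), only the data columns are written
without_index = df.to_csv(index=False)

print(with_index.splitlines())     # [',a', '0,1', '1,2']
print(without_index.splitlines())  # ['a', '1', '2']
```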
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
The CSV file is formatted properly. We can verify this by trying to read it back.
pd.read_csv('results/movies3.csv')
_____no_output_____
MIT
python-os-and-filesystem.ipynb
Rakib1508/python-data-science
Step 1: Create the Python Script
In the cell below, you will need to complete the Python script and run the cell to generate the file using the magic `%%writefile` command. Your main task is to complete the following methods for the `PersonDetect` class:
* `load_model`
* `predict`
* `draw_outputs`
* `preprocess_outputs`
* `p...
import numpy as np
import time
import os
import cv2
import argparse
import sys, traceback

class Queue:
    '''
    Class for dealing with queues
    '''
    def __init__(self):
        self.queues=[]

    def add_queue(self, points):
        self.queues.append(points)

    def get_queues(self, image):
        for q in...
Overwriting person_coords_test.py
BSD-3-Clause
Create_Python_Script.ipynb
swastiknath/iot_ud_2
Proof-of-concept implementation of Stable Opponent Shaping ([SOS](https://openreview.net/pdf?id=SyGjjsC5tQ)). Feel free to define any n-player game and compare SOS with other learning algorithms including Naive Learning (NL), [LOLA](https://arxiv.org/pdf/1709.04326.pdf), [LA](https://openreview.net/pdf?id=SyGjjsC5tQ),...
%matplotlib inline
import numpy as np
import torch
import matplotlib.pyplot as plt

#@markdown Game definitions for matching pennies, iterated prisoner's
#@markdown dilemma and tandem (define your own here).

def matching_pennies():
    dims = [1, 1]
    payout_mat_1 = torch.Tensor([[1,-1],[-1,1]])
    payout_mat_2 = -p...
_____no_output_____
MIT
stable_opponent_shaping.ipynb
qxcv/stable-opponent-shaping
Setting a device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
cuda
MIT
1/7-transfer-learning-and-finetuning-v-1/7_Transfer_Learning_and_Finetuning.ipynb
HussamCheema/pytorch
Initializing the Hyperparameters
num_classes = 10
learning_rate = 1e-3
batch_size = 1024
num_epochs = 5
_____no_output_____
MIT
1/7-transfer-learning-and-finetuning-v-1/7_Transfer_Learning_and_Finetuning.ipynb
HussamCheema/pytorch
Load Pretrained Model
class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, x):
        return x

model = torchvision.models.vgg16(pretrained=True)

# Freezing the parameters with no_grad
for param in model.parameters():
    param.requires_grad = False

model.avgpool = Identity()  # avgpool[1] = I...
_____no_output_____
MIT
1/7-transfer-learning-and-finetuning-v-1/7_Transfer_Learning_and_Finetuning.ipynb
HussamCheema/pytorch
Load Data
train_dataset = datasets.CIFAR10(
    root="dataset/", train=True, transform=transforms.ToTensor(), download=True
)
train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True)

test_dataset = datasets.CIFAR10(
    root="dataset/", train=False, transform=transforms.ToTensor(), download=True
)
t...
Files already downloaded and verified
Files already downloaded and verified
MIT
1/7-transfer-learning-and-finetuning-v-1/7_Transfer_Learning_and_Finetuning.ipynb
HussamCheema/pytorch
Loss & Optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)
_____no_output_____
MIT
1/7-transfer-learning-and-finetuning-v-1/7_Transfer_Learning_and_Finetuning.ipynb
HussamCheema/pytorch
Training the Model
for epoch in tqdm(range(num_epochs)):
    losses = []
    for ind, (data, target) in enumerate(train_loader):
        data = data.to(device=device)
        target = target.to(device=device)

        # forward
        scores = model(data)
        loss = criterion(scores, target)
        losses.append(loss)

        # backward
        optimizer.zero_g...
20%|██ | 1/5 [00:22<01:28, 22.09s/it]
MIT
1/7-transfer-learning-and-finetuning-v-1/7_Transfer_Learning_and_Finetuning.ipynb
HussamCheema/pytorch
Testing the Model
def check_accuracy(loader, model):
    num_correct = 0
    num_samples = 0
    model.eval()  # Let the model know that this is evaluation mode

    with torch.no_grad():
        for x, y in loader:
            x = x.to(device=device)
            y = y.to(device=device)

            scores = model(x)
            # Which class has the max value ...
Got 6021 / 10000 with accuracy 60.21%
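The truncated part of `check_accuracy` takes the argmax over class scores and compares it to the labels. The core computation, sketched framework-agnostically with NumPy instead of PyTorch (the function name here is illustrative):

```python
import numpy as np

def accuracy_from_scores(scores, targets):
    """scores: (N, num_classes) array of model outputs,
    targets: (N,) array of integer class labels."""
    predictions = np.argmax(scores, axis=1)      # class with the max score per sample
    num_correct = int((predictions == targets).sum())
    return num_correct / len(targets)

scores = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
targets = np.array([1, 0, 0])
print(accuracy_from_scores(scores, targets))  # 2 of 3 correct
```

The printed 60.21% above is exactly this ratio, `6021 / 10000`, accumulated over all test batches.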
MIT
1/7-transfer-learning-and-finetuning-v-1/7_Transfer_Learning_and_Finetuning.ipynb
HussamCheema/pytorch
Getting started with OpenSCM-Runner
To be written
# NBVAL_IGNORE_OUTPUT
import openscm_runner

# NBVAL_IGNORE_OUTPUT
print(openscm_runner.__version__)
0.1.0-alpha.0
BSD-3-Clause
notebooks/getting-started.ipynb
znicholls/openscm-runner-1
Implementing dask-searchCV to speed-up gridsearchCV and including PCA in the pipeline
This is just a quick notebook to see:
* How much does dask-searchCV speed things up? (A: More than 3X faster for the 2.TCGA-MLexample)
* Can you include PCA (and a search for n_components) in the pipeline? (A: Yes!)
* Are there any limi...
%%time
import os
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import preprocessing
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import GridSearchCV as GridSearc...
_____no_output_____
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
Specify model configuration
# We're going to be building a 'TP53' classifier
GENE = '7157' # TP53
_____no_output_____
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
Load Data
%%time
try:
    path = os.path.join('..', '..', 'download', 'expression-matrix.pkl')
    X = pd.read_pickle(path)
except:
    path = os.path.join('..', '..', 'download', 'expression-matrix.tsv.bz2')
    X = pd.read_table(path, index_col=0)

try:
    path = os.path.join('..', '..', 'download', 'mutation-matrix.pkl')
    ...
_____no_output_____
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
2. Evaluate dask-searchCV on original notebook 2 pipeline
Median absolute deviation feature selection
def fs_mad(x, y):
    """
    Get the median absolute deviation (MAD) for each column of x
    """
    scores = mad(x)
    return scores, np.array([np.NaN]*len(scores))

# Parameter Sweep for Hyperparameters
# Modification from original Notebook 2: n_jobs set to 1 instead of -1
param_grid_original = {
    'select...
_____no_output_____
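For reference, the median absolute deviation that `fs_mad` relies on (`mad` presumably comes from a library such as `statsmodels`, which additionally rescales by a normal consistency constant) can be sketched directly in NumPy — this is the unscaled version:

```python
import numpy as np

def mad_per_column(x):
    """Median absolute deviation of each column of a 2-D array."""
    med = np.median(x, axis=0)                   # per-column median
    return np.median(np.abs(x - med), axis=0)    # median of absolute deviations

x = np.array([[1.0, 10.0],
              [2.0, 10.0],
              [3.0, 10.0],
              [4.0, 10.0],
              [100.0, 10.0]])
print(mad_per_column(x))  # robust to the outlier in column 0
```

MAD is a robust dispersion measure: the single outlier (100.0) barely moves it, which is why it works well for selecting high-variability genes without being dominated by a few extreme samples.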
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
Original (SciKit-Learn)
%%time
cv_pipeline_original = GridSearchCV_original(estimator=pipeline_original,
                                        param_grid=param_grid_original,
                                        n_jobs=1, scoring='roc_auc')
cv_pipeline_original.fit(X=X_train, y=y_train)
Wall time: 5min 9s
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
dask-searchCV
%%time
cv_pipeline_original_dask = GridSearchCV_dask(estimator=pipeline_original,
                                        param_grid=param_grid_original,
                                        n_jobs=1, scoring='roc_auc')
cv_pipeline_original_dask.fit(X=X_train, y=y_train)
Wall time: 1min 31s
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
dask-searchCV with CV=10 (10 cross-validation splits as opposed to the default 3)
%%time
cv_pipeline_original_dask = GridSearchCV_dask(estimator=pipeline_original,
                                        param_grid=param_grid_original,
                                        cv=10, n_jobs=1, scoring='roc_auc')
cv_pipeline_original_dask.fit(X=X_train, y=y_train)
Wall time: 6min 18s
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
3. Evaluate including PCA in the pipeline
Trivial/Benchmark Case [2, 4] with default cv=3
param_grid = {
    'pca__n_components': [2,4],
    'classify__loss': ['log'],
    'classify__penalty': ['elasticnet'],
    'classify__alpha': [10 ** x for x in range(-3, 1)],
    'classify__l1_ratio': [0, 0.2, 0.8, 1],
}

pipeline = Pipeline(steps=[
    ('standardize-pre', StandardScaler()),
    ('pca', PCA()),
    ('s...
Wall time: 4min 37s
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
Trivial/Benchmark Case [2, 4] with cv=10
param_grid = {
    'pca__n_components': [2,4],
    'classify__loss': ['log'],
    'classify__penalty': ['elasticnet'],
    'classify__alpha': [10 ** x for x in range(-3, 1)],
    'classify__l1_ratio': [0, 0.2, 0.8, 1],
}

pipeline = Pipeline(steps=[
    ('standardize-pre', StandardScaler()),
    ('pca', PCA()),
    ('s...
Wall time: 15min 30s
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
Long list of few components [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
param_grid = {
    'pca__n_components': [2, 4, 6, 8, 10, 12, 14, 16, 18, 20],
    'classify__loss': ['log'],
    'classify__penalty': ['elasticnet'],
    'classify__alpha': [10 ** x for x in range(-3, 1)],
    'classify__l1_ratio': [0, 0.2, 0.8, 1],
}

pipeline = Pipeline(steps=[
    ('standardize-pre', StandardScaler(...
Wall time: 21min 54s
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
Short list of many components [3000, 5000, 7000]
param_grid = {
    'pca__n_components': [3000, 5000, 7000],
    'classify__loss': ['log'],
    'classify__penalty': ['elasticnet'],
    'classify__alpha': [10 ** x for x in range(-3, 1)],
    'classify__l1_ratio': [0, 0.2, 0.8, 1],
}

pipeline = Pipeline(steps=[
    ('standardize-pre', StandardScaler()),
    ('pca', PC...
Wall time: 1h 1min 51s
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
Full Sweep [20, 30, 45, 67, 100, 150, 225, 337, 505, 757, 1135, 1702, 2553, 3829]
listOfComponents = []
component = 20
while component < 4000:
    listOfComponents.append(component)
    component = component + (0.5 * component)
    component = int(component)
print(listOfComponents)

param_grid = {
    'pca__n_components': [20, 30, 45, 67, 100, 150, 225, 337, 505, 757, 1135, 1702, 2553, 3829],
    'cl...
Wall time: 1h 24min 47s
BSD-3-Clause
explore/dask-searchCV/dask-searchCV.ipynb
kykosic/machine-learning
Genotype VCF file quality control
This implements some recommendations from UK Biobank on [exome sequence data quality control](https://www.medrxiv.org/content/10.1101/2020.11.02.20222232v1.full-text).

Overview
The goal of this module is to perform QC on VCF files, including
1. Handling the formatting of multi-allelic...
grep Ts/Tv MWE_genotype.leftnorm.known_variant.snipsift_tstv | rev | cut -d',' -f1 | rev
2.599
MIT
code/data_preprocessing/genotype/VCF_QC.ipynb
changebio/xqtl-pipeline
For known variant after QC:
grep Ts/Tv MWE_genotype.leftnorm.filtered.*_variant.snipsift_tstv | rev | cut -d',' -f1 | rev
2.600
MIT
code/data_preprocessing/genotype/VCF_QC.ipynb
changebio/xqtl-pipeline
For novel variants before/after QC, Ts/Tv is not available since no novel variants are present in the MWE
grep Ts/Tv MWE_genotype.leftnorm.novel_variant.snipsift_tstv | rev | cut -d',' -f1 | rev
grep Ts/Tv MWE_genotype.leftnorm.filtered.novel_variant.snipsift_tstv | rev | cut -d',' -f1 | rev
_____no_output_____
MIT
code/data_preprocessing/genotype/VCF_QC.ipynb
changebio/xqtl-pipeline
Running in Parallel
```
sos run VCF_QC.ipynb qc \
    --genoFile data/mwe/mwe_genotype_list \
    --dbsnp-variants data/reference_data/00-All.add_chr.variants.gz \
    --reference-genome data/reference_data/GRCh38_full_analysis_set_plus_decoy_hla.noALT_noHLA_noDecoy_ERCC.fasta \
    --cwd MWE/output/genotype_4 --container ./bioinfo.sif -J 3 -c ...
```
_____no_output_____
MIT
code/data_preprocessing/genotype/VCF_QC.ipynb
changebio/xqtl-pipeline
Command Interface
sos run VCF_QC.ipynb -h
usage: sos run VCF_QC.ipynb [workflow_name | -t targets] [options] [workflow_options]
  workflow_name:        Single or combined workflows defined in this script
  targets:              One or more targets to generate
  options:              Single-hyphen sos parameters (see "sos run -h" for details)
  workflow_options...
MIT
code/data_preprocessing/genotype/VCF_QC.ipynb
changebio/xqtl-pipeline
Global parameters
[global]
# input can either be 1 vcf genoFile, or a list of vcf genoFile.
parameter: genoFile = paths
# Workdir
parameter: cwd = path
# Number of threads
parameter: numThreads = 1
# For cluster jobs, number commands to run per job
parameter: job_size = 1
# Walltime
parameter: walltime = '5h'
parameter: mem = '60G'
# S...
_____no_output_____
MIT
code/data_preprocessing/genotype/VCF_QC.ipynb
changebio/xqtl-pipeline
Annotation of known and novel variants
The known variant reference can be downloaded from https://ftp.ncbi.nlm.nih.gov/snp/organisms/human_9606_b150_GRCh38p7/VCF/00-All.vcf.gz. The procedure/rationale is [explained in this post](https://hbctraining.github.io/In-depth-NGS-Data-Analysis-Course/sessionVI/lessons/03_annotat...
[rename_chrs: provides = '(unknown).add_chr.vcf.gz']
parameter: walltime = '24h'
# This file can be downloaded from https://ftp.ncbi.nlm.nih.gov/snp/organisms//human_9606_b150_GRCh38p7/VCF/00-All.vcf.gz.
input: f'(unknown).vcf.gz'
output: f'{_input:nn}.add_chr.vcf.gz'
task: trunk_workers = 1, trunk_size = job_size, w...
_____no_output_____
MIT
code/data_preprocessing/genotype/VCF_QC.ipynb
changebio/xqtl-pipeline
Genotype QC
This step handles multi-allelic sites and annotates variants as known or novel: we add an rsID to variants found in dbSNP, and variants without an rsID are considered novel. The workflow processes roughly 14 Gb of data per hour, so please set the --walltime parameter according to the size of your input files.
# Handle multi-allelic sites, left normalization of indels and add variant ID
[qc_1 (variant preprocessing)]
parameter: walltime = '24h'
# Path to dbSNP variants generated previously
parameter: dbsnp_variants = path
# Path to fasta file for HG reference genome, eg GRCh38_full_analysis_set_plus_decoy_hla.fa
parameter: r...
_____no_output_____
MIT
code/data_preprocessing/genotype/VCF_QC.ipynb
changebio/xqtl-pipeline
This step filters variants based on FILTER PASS, DP and GQ, fraction of missing genotypes (across all samples), and HWE, for SNPs and indels. It also removes monomorphic sites using `bcftools view -c1`.
# genotype QC
[qc_2 (variant level QC)]
parameter: walltime = '24h'
# Maximum missingness per-variant
parameter: geno_filter = 0.1
# Sample level QC - read depth (DP) to filter out SNPs below this value
# Default to 10, with WES data in mind
# But for WGS, setting it to 2 may be fine considering the WGS may have low DP...
_____no_output_____
MIT
code/data_preprocessing/genotype/VCF_QC.ipynb
changebio/xqtl-pipeline
Finally we export it to PLINK 1.0 format, **without keeping allele orders**. Notice that PLINK 1.0 format does not allow for dosages; PLINK 2.0 supports them, but is generally not supported by downstream data analysis tools. In the following code block the option `--vcf-half-call m` treats half-calls as missing. Also,...
[qc_3 (export to PLINK)]
parameter: walltime = '24h'
parameter: remove_duplicates = False
output: f'{_input:nn}.bed'
task: trunk_workers = 1, trunk_size = job_size, walltime = walltime, mem = mem, cores = numThreads, tags = f'{step_name}_{_output:bn}'
bash: container = container, expand= "${ }", stderr = f'{_output:n}....
_____no_output_____
MIT
code/data_preprocessing/genotype/VCF_QC.ipynb
changebio/xqtl-pipeline
Load json
inst_gt_json_file = "../datasets/lvis/annotations/lvis_v0.5_"+stage+".json"
data_path = '../datasets/images/'+stage+'2017'

with open(inst_gt_json_file, 'r') as f:
    inst_gt = json.load(f)

inst_gt.keys()
_____no_output_____
MIT
jupyter_notebook/LVIS_visualization.ipynb
JoyHuYY1412/maskxrcnn_finetune
Sorted by instance count
sorted_inst = sorted(inst_gt['categories'], key=lambda k: k['instance_count'], reverse=True)
sorted_instance_count = [item['instance_count'] for item in sorted_inst]
sorted_class_name = [item['name'] for item in sorted_inst]
sorted_frequency = [item['frequency'] for item in sorted_inst]

from collections import default...
_____no_output_____
MIT
jupyter_notebook/LVIS_visualization.ipynb
JoyHuYY1412/maskxrcnn_finetune
Draw distribution
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
# matplotlib.use('Agg')

plt.rcParams['xtick.direction'] = 'in'
plt.rcParams['ytick.direction'] = 'in'
pylab.rcParams['figure.figsize'] = (20.0, 7.0)

x=range(len(sorted_instance_count))
y=sorted_instanc...
_____no_output_____
MIT
jupyter_notebook/LVIS_visualization.ipynb
JoyHuYY1412/maskxrcnn_finetune
Show an example
import torch
import torchvision

min_keypoints_per_image = 10

def _count_visible_keypoints(anno):
    return sum(sum(1 for v in ann["keypoints"][2::3] if v > 0) for ann in anno)

def _has_only_empty_bbox(anno):
    return all(any(o <= 1 for o in obj["bbox"][2:]) for obj in anno)

def has_valid_annotation(anno):
    ...
loading annotations into memory...
Done (t=15.96s)
creating index...
index created!
MIT
jupyter_notebook/LVIS_visualization.ipynb
JoyHuYY1412/maskxrcnn_finetune
Select one image & load annotation
get_id = 1412
ori_img, gt_anno = dataset[get_id]

# Step 1: get img_id, h, w
"""
img_id = gt_anno[0]['image_id']
img = next(item for item in inst_gt['images'] if item['id']==img_id)
"""
# equals to
img = dataset.get_img_info(get_id)
# img_name = img['file_name']
# img_path = osp.join('../dataset/coco/images/val2017',img...
_____no_output_____
MIT
jupyter_notebook/LVIS_visualization.ipynb
JoyHuYY1412/maskxrcnn_finetune
Visualization
def compute_colors_for_labels(labels):
    """
    Simple function that adds fixed colors depending on the class
    """
    palette = torch.tensor([2 ** 25 - 1, 2 ** 15 - 1, 2 ** 21 - 1])
    colors = labels[:, None] * palette
    colors = (colors % 255).numpy().astype("uint8")
    return colors

not_exhaust_label_info...
_____no_output_____
MIT
jupyter_notebook/LVIS_visualization.ipynb
JoyHuYY1412/maskxrcnn_finetune
Content with notebooks
You can also create content with Jupyter Notebooks. This means that you can include code blocks and their outputs in your book.

Markdown + notebooks
As it is markdown, you can embed images, HTML, etc. into your posts!

![](https://myst-parser.readthedocs.io/en/latest/_static/logo.png)

You can also $add_...
from matplotlib import rcParams, cycler
import matplotlib.pyplot as plt
import numpy as np
plt.ion()

# Fixing random state for reproducibility
np.random.seed(19680801)

N = 10
data = [np.logspace(0, 1, 100) + np.random.randn(100) + ii for ii in range(N)]
data = np.array(data).T
cmap = plt.cm.coolwarm
rcParams['axes.pro...
_____no_output_____
MIT
data_movies/_build/jupyter_execute/notebooks.ipynb
ekpyrosis/data_movies
Mask R-CNN - Train on Shapes Dataset
This notebook shows how to train Mask R-CNN on your own dataset. To keep things simple we use a synthetic dataset of shapes (squares, triangles, and circles) which enables fast training. You'd still need a GPU, though, because the network backbone is a Resnet101, which would be too ...
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import matplotlib
import matplotlib.pyplot as plt

from config import Config
import utils
import model as modellib
import visualize
from model import log

%matplotlib inline

# Root directory of the project
ROOT_DIR = os...
_____no_output_____
MIT
notebooks/Originals/train_shapes_orig.ipynb
kbardool/Mask_RCNN_2
Configurations
class ShapesConfig(Config):
    """Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    """
    # Give the configuration a recognizable name
    NAME = "shapes"

    # Train on 1 GPU and 8 images per GPU. We can put...
Configurations:
BACKBONE_SHAPES                [[32 32] [16 16] [ 8  8] [ 4  4] [ 2  2]]
BACKBONE_STRIDES               [4, 8, 16, 32, 64]
BATCH_SIZE                     8
BBOX_STD_DEV                   [0.1 0.1 0.2 0.2]
DETECTION_MAX_INSTANCES        100
DETECTION_MIN_CONFIDENCE       0.7
DETECTION_NMS_THRESHOLD        ...
MIT
notebooks/Originals/train_shapes_orig.ipynb
kbardool/Mask_RCNN_2
Notebook Preferences
def get_ax(rows=1, cols=1, size=8):
    """Return a Matplotlib Axes array to be used in
    all visualizations in the notebook. Provide a
    central point to control graph sizes.

    Change the default size attribute to control the size
    of rendered images
    """
    _, ax = plt.subplots(rows, cols, figsize=(...
_____no_output_____
MIT
notebooks/Originals/train_shapes_orig.ipynb
kbardool/Mask_RCNN_2
Dataset
Create a synthetic dataset
Extend the Dataset class and add a method to load the shapes dataset, `load_shapes()`, and override the following methods:
* load_image()
* load_mask()
* image_reference()
class ShapesDataset(utils.Dataset):
    """Generates the shapes synthetic dataset.
    The dataset consists of simple shapes (triangles, squares, circles)
    placed randomly on a blank surface. The images are generated on the
    fly. No file access required.
    """
    def load_shapes(self, count, height, width):
        ...
_____no_output_____
MIT
notebooks/Originals/train_shapes_orig.ipynb
kbardool/Mask_RCNN_2
Create Model
# import importlib
# importlib.reload(model)

# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
                          model_dir=MODEL_DIR)

# Which weights to start with?
init_with = "coco"  # imagenet, coco, or last

if init_with == "imagenet":
    model.load_weights(model.get...
_____no_output_____
MIT
notebooks/Originals/train_shapes_orig.ipynb
kbardool/Mask_RCNN_2
Training
Train in two stages:
1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones that we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function.
2. Fine-tune all layers. For t...
import keras
keras.__version__

# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
model.train(dataset_train, dataset_val,
            learning_rate=config.LEARNING_RATE,
            ...
_____no_output_____
MIT
notebooks/Originals/train_shapes_orig.ipynb
kbardool/Mask_RCNN_2
Detection
class InferenceConfig(ShapesConfig):
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

inference_config = InferenceConfig()

# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
                          config=inference_config,
                          model_dir=MODEL_DIR)

# Get path to saved ...
Processing 1 images
image          shape: (128, 128, 3)     min:    2.00000  max:  198.00000
molded_images  shape: (1, 128, 128, 3)  min: -121.70000  max:   94.10000
image_metas    shape: (1, 12)          min:    0.00000  max:  128.00000
MIT
notebooks/Originals/train_shapes_orig.ipynb
kbardool/Mask_RCNN_2
Evaluation
# Compute VOC-Style mAP @ IoU=0.5
# Running on 10 images. Increase for better accuracy.
image_ids = np.random.choice(dataset_val.image_ids, 10)
APs = []
for image_id in image_ids:
    # Load image and ground truth data
    image, image_meta, gt_class_id, gt_bbox, gt_mask =\
        modellib.load_image_gt(dataset_val, i...
mAP: 1.0
MIT
notebooks/Originals/train_shapes_orig.ipynb
kbardool/Mask_RCNN_2
Duel of sorcerers
You are witnessing an epic battle between two powerful sorcerers: Gandalf and Saruman. Each sorcerer has 10 spells of variable power in their mind and they are going to throw them one after the other. The winner of the duel will be the one who wins more of those clashes between spells. Spells are repr...
# Assign spell power lists to variables
gandalf = [10, 11, 13, 30, 22, 11, 10, 33, 22, 22]
saruman = [23, 66, 12, 43, 12, 10, 44, 23, 12, 17]

# Assign 0 to each variable that stores the victories
G_win = 0
S_win = 0
draw = 0

# Execution of spell clashes
for i in range(len(gandalf)):
    if (gandalf[i] > saruman[i]):
    ...
Gandalf wins: 6
Saruman wins: 4
Draws: 0
Unlicense
duel/duel.ipynb
Gayushka/data-prework-labs
Goals
1. Treatment of lists
2. Use of **for loop**
3. Use of conditional **if-elif-else**
4. Use of the functions **range(), len()**
5. Print

Bonus
1. Spells now have a name and there is a dictionary that relates that name to a power.
2. A sorcerer wins if he succeeds in winning 3 spell clashes in a row.
3. Average of each o...
# 1. Spells now have a name and there is a dictionary that relates that name to a power.
# variables
POWER = {
    'Fireball': 50,
    'Lightning bolt': 40,
    'Magic arrow': 10,
    'Black Tentacles': 25,
    'Contagion': 45
}

gandalf = ['Fireball', 'Lightning bolt', 'Lightning bolt', 'Magic arrow', 'Fireball',...
_____no_output_____
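Bonus rule 2 (a sorcerer wins by taking 3 clashes in a row) can be sketched as a plain Python helper — the function and variable names here are illustrative, not from the original notebook:

```python
def winner_by_streak(gandalf_powers, saruman_powers, streak_needed=3):
    """Return 'Gandalf', 'Saruman', or 'Tie': the first sorcerer to win
    `streak_needed` consecutive clashes takes the duel."""
    g_streak = s_streak = 0
    for g, s in zip(gandalf_powers, saruman_powers):
        if g > s:
            g_streak, s_streak = g_streak + 1, 0
        elif s > g:
            g_streak, s_streak = 0, s_streak + 1
        else:
            g_streak = s_streak = 0  # a draw resets both streaks
        if g_streak == streak_needed:
            return 'Gandalf'
        if s_streak == streak_needed:
            return 'Saruman'
    return 'Tie'

print(winner_by_streak([10, 11, 13, 30, 22], [23, 66, 12, 43, 12]))
```

With the named-spell lists above, you would first map each spell name through `POWER` to get the power lists, then pass those to this helper.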
Unlicense
duel/duel.ipynb
Gayushka/data-prework-labs
Basic workflow with ART for evasion attacks and defenses
%matplotlib inline
import keras.backend as k
from keras.applications import vgg16
from keras.preprocessing import image
from keras.utils import np_utils
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
Using TensorFlow backend.
MIT
notebooks/attack_defense_imagenet.ipynb
Viktour19/adversarial-robustness-toolbox
Import a standard visual recognition model
Here, we will use Keras as backend and ImageNet as dataset. We load the standard ResNet50 model from Keras.
# Load model
from keras.applications.resnet50 import ResNet50, preprocess_input
from art.classifiers import KerasClassifier

mean_imagenet = np.zeros([224, 224, 3])
mean_imagenet[...,0].fill(103.939)
mean_imagenet[...,1].fill(116.779)
mean_imagenet[...,2].fill(123.68)
model = ResNet50(weights='imagenet')
cl...
_____no_output_____
MIT
notebooks/attack_defense_imagenet.ipynb
Viktour19/adversarial-robustness-toolbox
Load an ImageNet example image. You can replace this with your own images.
from os.path import join, abspath, expanduser

# Get Imagenet labels
path = expanduser('~/git/nemesis')
with open(join(path, "imagenet/labels.txt"), "r") as f_input:
    class_names = eval(f_input.read())

# Get some data
image_file = join(path,'test_api/clean100/n04479046_2998_224x224.jpg')
image_ = image.load_img(ima...
Prediction: trench coat - confidence 1.00
MIT
notebooks/attack_defense_imagenet.ipynb
Viktour19/adversarial-robustness-toolbox
Perform evasion attack
from art.attacks import ProjectedGradientDescent

# Create attacker
adv = ProjectedGradientDescent(classifier, targeted=False, eps_step=1, eps=4, max_iter=1)

# Generate attack image
img_adv = adv.generate(img)

# Evaluate it on model
pred_adv = model.predict(img_adv)
label_adv = np.argmax(pred_adv, axis=1)[0]
confiden...
Prediction: military uniform - confidence 0.98
MIT
notebooks/attack_defense_imagenet.ipynb
Viktour19/adversarial-robustness-toolbox
Compute attack statistics We can measure the amount of noise that the attack introduced into the image under different $L_p$ norms. Notice that projected gradient descent bounds the $L_\infty$ norm of the perturbation, so it touches almost every pixel (a large $L_0$ norm) while keeping each individual change small.
import numpy as np l0 = int(99*len(np.where(np.abs(img[0] - img_adv[0])>0.5)[0]) / (224*224*3)) + 1 l1 = int(99*np.sum(np.abs(img[0] - img_adv[0])) / np.sum(np.abs(img[0]))) + 1 l2 = int(99*np.linalg.norm(img[0] - img_adv[0]) / np.linalg.norm(img[0])) + 1 linf = int(99*np.max(np.abs(img[0] - img_adv[0])) / 255) + ...
Noise L_0 norm: 99% Noise L_2 norm: 1% Noise L_inf norm: 1%
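The norm computations above can be restated in a self-contained sketch; the random arrays below stand in for `img` and `img_adv` and are assumptions, not the notebook's data:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.uniform(0, 255, size=(224, 224, 3))
noise = rng.uniform(-4, 4, size=img.shape)   # small perturbation, as in an eps=4 attack
img_adv = np.clip(img + noise, 0, 255)

diff = img - img_adv
l0 = np.count_nonzero(np.abs(diff) > 0.5) / diff.size  # fraction of pixels changed
l2 = np.linalg.norm(diff) / np.linalg.norm(img)        # relative Euclidean norm
linf = np.max(np.abs(diff)) / 255                      # largest per-pixel change
print(f'L0: {l0:.2f}  L2: {l2:.4f}  Linf: {linf:.4f}')
```

As in the notebook's output, the $L_0$ fraction is large while the $L_2$ and $L_\infty$ ratios stay small.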
MIT
notebooks/attack_defense_imagenet.ipynb
Viktour19/adversarial-robustness-toolbox
Apply defense We apply feature squeezing as a defense before feeding the attacked image into the classification model. This defense reduces the colour depth of the image: here, each colour channel is encoded on 4 bits (`bit_depth=4`). Notice that the prediction is now correct again, albeit with a lower confidence value...
from art.defences import FeatureSqueezing fs = FeatureSqueezing(bit_depth=4) img_def = fs(img_adv, clip_values=(0, 255)) pred_def = model.predict(img_def) label_def = np.argmax(pred_def, axis=1)[0] confidence_def = pred_def[:, label_def][0] print('Prediction:', class_names[label_def], '- confidence {0:.2f}'.format(con...
Prediction: trench coat - confidence 0.60
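The colour-depth reduction can be sketched directly; this is an illustrative re-implementation of the idea, not ART's `FeatureSqueezing` internals:

```python
import numpy as np

def squeeze_bits(x, bit_depth=4, clip_values=(0, 255)):
    """Reduce each channel to `bit_depth` bits by quantising to 2**bit_depth levels."""
    lo, hi = clip_values
    levels = 2 ** bit_depth - 1
    normalized = (x - lo) / (hi - lo)              # scale into [0, 1]
    squeezed = np.round(normalized * levels) / levels
    return squeezed * (hi - lo) + lo               # scale back to the original range

x = np.array([0.0, 17.0, 128.0, 255.0])
print(squeeze_bits(x, bit_depth=4))  # 128 snaps to 136, the nearest of the 16 levels
```

With `bit_depth=4` each channel can take at most 16 distinct values, which removes the fine-grained perturbations the attack relies on.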
MIT
notebooks/attack_defense_imagenet.ipynb
Viktour19/adversarial-robustness-toolbox
Utils Examples
# Change path to project root %cd .. # change the path and loading class import os, sys import pandas as pd import seaborn as sns !dir from amir.utils import square square(2)
_____no_output_____
MIT
examples/utils.ipynb
amirhessam88/amirhessam
STAT 453: Deep Learning (Spring 2021) Instructor: Sebastian Raschka (sraschka@wisc.edu) Course website: http://pages.stat.wisc.edu/~sraschka/teaching/stat453-ss2021/ GitHub repository: https://github.com/rasbt/stat453-deep-learning-ss21---
%load_ext watermark %watermark -a 'Sebastian Raschka' -v -p torch
Author: Sebastian Raschka Python implementation: CPython Python version : 3.9.2 IPython version : 7.21.0 torch: 1.9.0a0+d819a21
MIT
L12/code/batchsize-1024.ipynb
sum-coderepo/stat453-deep-learning-ss21
MLP with Dropout Imports
import torch import numpy as np import matplotlib.pyplot as plt # From local helper files from helper_evaluation import set_all_seeds, set_deterministic from helper_train import train_model from helper_plotting import plot_training_loss, plot_accuracy, show_examples from helper_dataset import get_dataloaders_mnist
_____no_output_____
MIT
L12/code/batchsize-1024.ipynb
sum-coderepo/stat453-deep-learning-ss21
Settings and Dataset
########################## ### SETTINGS ########################## RANDOM_SEED = 123 BATCH_SIZE = 1024 NUM_HIDDEN_1 = 75 NUM_HIDDEN_2 = 45 NUM_EPOCHS = 100 DEVICE = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') set_all_seeds(RANDOM_SEED) set_deterministic() ########################## ### MNIST DATASET...
Image batch dimensions: torch.Size([1024, 1, 28, 28]) Image label dimensions: torch.Size([1024]) Class labels of 10 examples: tensor([4, 5, 8, 9, 9, 4, 9, 9, 3, 9])
MIT
L12/code/batchsize-1024.ipynb
sum-coderepo/stat453-deep-learning-ss21
Model
class MultilayerPerceptron(torch.nn.Module): def __init__(self, num_features, num_classes, drop_proba, num_hidden_1, num_hidden_2): super().__init__() self.my_network = torch.nn.Sequential( # 1st hidden layer torch.nn.Flatten(), torch.n...
Epoch: 001/100 | Batch 0000/0052 | Loss: 2.3903 Epoch: 001/100 | Batch 0025/0052 | Loss: 1.2462 Epoch: 001/100 | Batch 0050/0052 | Loss: 0.9135 Epoch: 001/100 | Train: 87.80% | Validation: 90.57% Time elapsed: 0.04 min Epoch: 002/100 | Batch 0000/0052 | Loss: 0.8832 Epoch: 002/100 | Batch 0025/0052 | Loss: 0.7216 Epoch...
MIT
L12/code/batchsize-1024.ipynb
sum-coderepo/stat453-deep-learning-ss21
Image ClassificationIn this project, you'll classify images from the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be norm...
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10...
All files found!
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Explore the DataThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named `data_batch_1`, `data_batch_2`, etc.. Each batch contains the labels and images that are one of the following:* airplane* automobile* bird* cat* deer* dog* frog* hor...
%matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Stats of batch 1: Samples: 10000 Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981} First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6] Example of Image 5: Image - Min Value: 0 Max Value: 252 Image - Shape: (32, 32, 3) Label - Label Id: 1 Nam...
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Implement Preprocess Functions NormalizeIn the cell below, implement the `normalize` function to take in image data, `x`, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as `x`.
def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalized data """ return (x - float(np.min(x))) / (float(np.max(x)) - float(np.min(x))) """ DON'T MODIFY ANYTHING IN THIS CELL TH...
Tests Passed
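The same min-max scaling can be sketched in a self-contained form, runnable outside the notebook:

```python
import numpy as np

def normalize(x):
    """Min-max scale image data into [0, 1]; the output shape matches the input."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

img = np.array([[0, 63], [128, 252]])
out = normalize(img)
print(out.min(), out.max())  # 0.0 1.0
```

Whatever the input range, the minimum maps to 0 and the maximum to 1, and the shape is preserved.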
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
One-hot encodeJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the `one_hot_encode` function. The input, `x`, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are...
mapping = np.identity(10) def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ return mapping[x,:] """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW TH...
Tests Passed
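The identity-matrix trick used above can be sketched on its own: row `i` of `np.eye(10)` is exactly the one-hot vector for label `i`, and fancy indexing picks one row per label:

```python
import numpy as np

mapping = np.eye(10)          # row i is the one-hot vector for label i

labels = [6, 9, 1]
encoded = mapping[labels, :]  # fancy indexing picks one row per label
print(encoded.shape)          # (3, 10)
print(encoded[0])             # [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
```

This avoids any explicit loop over the labels.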
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Randomize DataAs you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save itRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also...
""" DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
_____no_output_____
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
_____no_output_____
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Build the networkFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittest...
import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. """ shape = [None, image_shape[0], image_shape[1], image_shape[2]] # print (shape) return tf.placeholder(t...
Image Input Tests Passed. Label Input Tests Passed. Keep Prob Tests Passed.
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Convolution and Max Pooling LayerConvolution layers have a lot of success with images. For this code cell, you should implement the function `conv2d_maxpool` to apply convolution then max pooling:* Create the weight and bias using `conv_ksize`, `conv_num_outputs` and the shape of `x_tensor`.* Apply a convolution to `x...
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernel size 2-D Tuple fo...
Tests Passed
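Max pooling itself can be sketched in plain NumPy; this is an illustrative stand-alone helper, not the TensorFlow graph op used above:

```python
import numpy as np

def max_pool_2x2(x):
    """Downsample a (H, W) array by taking the max over non-overlapping 2x2 windows."""
    h, w = x.shape
    # Trim odd edges, split into 2x2 blocks, and reduce each block to its maximum.
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.arange(16).reshape(4, 4)
print(max_pool_2x2(x))  # [[ 5  7] [13 15]]
```

Each output element is the largest value in its 2x2 window, halving both spatial dimensions.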
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Flatten LayerImplement the `flatten` function to change the dimension of `x_tensor` from a 4-D tensor to a 2-D tensor. The output should be the shape (*Batch Size*, *Flattened Image Size*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [Tens...
def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ # return tf.contrib.layers.flatten(x_tensor) num_features =...
Tests Passed
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Fully-Connected LayerImplement the `fully_conn` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tens...
def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of outputs that the new tensor should have. : return: A 2-D tensor where the second dimension is num_out...
Tests Passed
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Output LayerImplement the `output` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/ap...
def output(x_tensor, num_outputs): """ Apply an output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of outputs that the new tensor should have. : return: A 2-D tensor where the second dimension is num_outputs. """...
Tests Passed
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Create Convolutional ModelImplement the function `conv_net` to create a convolutional neural network model. The function takes in a batch of images, `x`, and outputs logits. Use the layers you created above to create this model:* Apply 1, 2, or 3 Convolution and Max Pool layers* Apply a Flatten Layer* Apply 1, 2, or ...
def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. : return: Tensor that represents logits """ # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers ...
Neural Network Built!
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Train the Neural Network Single OptimizationImplement the function `train_neural_network` to do a single optimization. The optimization should use `optimizer` to optimize in `session` with a `feed_dict` of the following:* `x` for image input* `y` for labels* `keep_prob` for keep probability for dropoutThis function w...
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Nu...
Tests Passed
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Show StatsImplement the function `print_stats` to print loss and validation accuracy. Use the global variables `valid_features` and `valid_labels` to calculate validation accuracy. Use a keep probability of `1.0` to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy...
_____no_output_____
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
HyperparametersTune the following parameters:* Set `epochs` to the number of iterations until the network stops learning or starts overfitting* Set `batch_size` to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ...* Set `keep_probability` to the ...
# TODO: Tune Parameters epochs = 15 batch_size = 128 keep_probability = 0.8
_____no_output_____
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Train on a Single CIFAR-10 BatchInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
""" DON'T MODIFY ANYTHING IN THIS CELL """ print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_label...
Checking the Training on a Single Batch... Epoch 1, CIFAR-10 Batch 1: Loss: 2.23101 ; Validation Accuracy: 0.1848 Epoch 2, CIFAR-10 Batch 1: Loss: 2.13237 ; Validation Accuracy: 0.2082 Epoch 3, CIFAR-10 Batch 1: Loss: 1.91991 ; Validation Accuracy: 0.2856 Epoch 4, CIFAR-10 Batch 1: Loss: 1.73474 ; Valida...
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Fully Train the ModelNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
""" DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batc...
Training... Epoch 1, CIFAR-10 Batch 1: Loss: 2.192 ; Validation Accuracy: 0.2484 Epoch 1, CIFAR-10 Batch 2: Loss: 1.83967 ; Validation Accuracy: 0.2702 Epoch 1, CIFAR-10 Batch 3: Loss: 1.58315 ; Validation Accuracy: 0.3176 Epoch 1, CIFAR-10 Batch 4: Loss: 1.58768 ; Validation Accuracy: 0.3832 Epoch 1, ...
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
CheckpointThe model has been saved to disk. Test ModelTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
""" DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_clas...
Testing Accuracy: 0.6847310126582279
MIT
image-classification/dlnd_image_classification.ipynb
nandakishorkoka/deep-learning-nd
Cribbage Scoring Rules and Basic Objects2021-07-03This notebook will look at the basic scoring rules and the code solutions to those rules. This series of notebooks is really about learning and automating the game's scoring. The methods should allow you to input a hand and receive the proper score for the hand with a break...
%%javascript //Disable autoscroll in the output cells IPython.OutputArea.prototype._should_scroll = function(lines) { return false; } import random from cribbage.cards import ( Card, make_deck, display_hand, score_hand, score_hand_breakdown, )
_____no_output_____
MIT
notebooks/0 cribbage scoring and objects.ipynb
TroyWilliams3687/cribbage
CardOriginally, the Card object was defined using a standard Python Class. We have revisited this decision and use a [Dataclass][link1]. This simplifies things quite a bit and removes a lot of boilerplate. The class itself was reduced and distilled, removing things that are not required. Moving methods out to separate ...
# Define a card c = Card(*'8S') # NOTE: We can use unpacking to properly unpack the string in the Card constructor. This makes creating a card # trivial # NOTE: The order of the string must be rank then suit. It will throw an exception otherwise print(f'Plain text: {c}') print(f'Fancy suit: {c....
Plain text: 8S Fancy suit: 8♠ Fancy unicode card: 🂨 8S is worth 8
MIT
notebooks/0 cribbage scoring and objects.ipynb
TroyWilliams3687/cribbage
The Card object also implements sorting: sorting a list of cards orders them by suit, then by rank, grouping cards of the same suit together. This is not quite the order in which most cribbage players would sort a hand.
cards = [Card(*c) for c in ('3S', '4C', '4D', '5H')] print('Unsorted: ', display_hand(cards, cool=True)) print('Sorted: ', display_hand(sorted(cards), cool=True))
Unsorted: ['3♠', '4♣', '4♦', '5♥'] Sorted: ['5♥', '4♦', '3♠', '4♣']
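The suit-then-rank ordering can be sketched with a minimal, hypothetical `MiniCard` dataclass; the real `cribbage.cards.Card` likely differs, and the suit order below is inferred from the sorted output above:

```python
from dataclasses import dataclass, field

SUITS = 'HDSC'           # assumed ordering, chosen to reproduce the sorted hand above
RANKS = 'A23456789TJQK'

@dataclass(frozen=True, order=True)
class MiniCard:
    sort_index: tuple = field(init=False, repr=False)
    rank: str
    suit: str

    def __post_init__(self):
        # Compare by suit first, then rank, as described above.
        object.__setattr__(self, 'sort_index',
                           (SUITS.index(self.suit), RANKS.index(self.rank)))

cards = [MiniCard(*c) for c in ('3S', '4C', '4D', '5H')]
print([c.rank + c.suit for c in sorted(cards)])  # ['5H', '4D', '3S', '4C']
```

Putting the `sort_index` field first and passing `order=True` lets the dataclass machinery generate the comparison methods.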
MIT
notebooks/0 cribbage scoring and objects.ipynb
TroyWilliams3687/cribbage
The unsorted hand would be how a cribbage player could organize the cards. The sorted hand sorts by suit then by rank. ScoringThere are quite a few methods that were defined to find all fifteens, runs, pairs, etc. There isn't much point in going through the individual methods here, since they will not be used individually. Th...
# Create a deck and take 5 cards at random. deck = make_deck() hand = list(random.sample(deck, 5)) # extract the last card as the cut card cut = hand[-1] # exclude the cut card from the list hand = hand[:-1] results = score_hand_breakdown( hand, cut, include_nibs=False, five_card_flush=False, ...
Hand = ['2♦', '7♦', '5♦', '2♣'] Cut = T♣ ----------------------- 1 Fifteens for 2 1 Pairs for 2 0 Runs for 0 Flush for 0 Nobs for 0 Nibs for 0 ----------------------- Total: 4 Fifteens: 1 - ['5♦', 'T♣'] Pairs: 1, ['2♦', '2♣']
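The fifteens count in the breakdown above can be sketched as a subset search; the card values here are standard cribbage assumptions (ace worth 1, face cards worth 10), not the package's own helpers:

```python
from itertools import combinations

VALUES = {'A': 1, 'T': 10, 'J': 10, 'Q': 10, 'K': 10}

def card_value(card):
    # `card` is a rank/suit string like '5D'; digit ranks map to themselves.
    rank = card[0]
    return VALUES.get(rank, int(rank) if rank.isdigit() else 0)

def fifteens(cards):
    """Return every subset of cards whose values sum to exactly 15 (2 points each)."""
    hits = []
    for size in range(2, len(cards) + 1):
        for combo in combinations(cards, size):
            if sum(card_value(c) for c in combo) == 15:
                hits.append(combo)
    return hits

hand = ['2D', '7D', '5D', '2C', 'TC']
print(len(fifteens(hand)))  # 1 — the hand above scores one fifteen: 5D + TC
```

This matches the breakdown above, where the only fifteen is the 5♦ plus the T♣ cut card.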
MIT
notebooks/0 cribbage scoring and objects.ipynb
TroyWilliams3687/cribbage
Midterm Exam Problem Statement 1
# Initialization of variables studentName = "Charlie Milaya" studentNumber = 202101869 studentAge = 18 studentBirthdate = "May 20, 2003" studentAddress = "13 San Juan I, Noveleta, Cavite" studentCourse = "BSCpE 1-1" studentPreviousGWA = 96 # Student Information print("Student Information") print("Name:", studentName) ...
Student Information Name: Charlie Milaya Student Number: 202101869 Age: 18 Birthday: May 20, 2003 Address: 13 San Juan I, Noveleta, Cavite Course: BSCpE 1-1 Last Semester GWA: 96
Apache-2.0
Midterm_Exam.ipynb
milayacharlieCvSU/CPEN-21A-CPE-1-1